
Department of Education

Academic Staff

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study, funded by the Jacobs Foundation, exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and holds a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her PhD in Psychology in Australia. Samantha’s expertise and interests lie at the intersection of education and psychology, approached through a cognitive psychology lens. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than ten years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which enables Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and as a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that gave primary and secondary school students the opportunity to contribute their views on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that while we cannot always build the future for them, we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia and, in 1985, was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand is the further articulation of the implications of Rasch models and the development of complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and how to deal with multidimensionality. He has also recently published a paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat multiple tests, for example for selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
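
To make the invariance property concrete, here is a minimal textbook sketch of the dichotomous Rasch model (standard notation; the symbols are illustrative and not taken from the paper cited above). The probability that person $n$ with location $\theta_n$ answers item $i$ with difficulty $\delta_i$ correctly is

\[
\Pr(X_{ni}=1) = \frac{e^{\theta_n-\delta_i}}{1+e^{\theta_n-\delta_i}}.
\]

Conditioning on a person answering exactly one of two items $i$ and $j$ correctly eliminates the person parameter:

\[
\Pr(X_{ni}=1 \mid X_{ni}+X_{nj}=1) = \frac{e^{\theta_n-\delta_i}}{e^{\theta_n-\delta_i}+e^{\theta_n-\delta_j}} = \frac{1}{1+e^{\delta_i-\delta_j}},
\]

so items can be compared independently of which persons happened to respond and, symmetrically, persons can be compared independently of the items used. This sufficiency property underlies the conditional estimation described in the Psychometrika paper above.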

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London, he retrained and worked as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity, 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate, on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission, Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023), and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO, the OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy & Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she currently leads an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also Principal Investigator for two research projects funded by the International Baccalaureate (IB): on critical thinking in PYP schools internationally, and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the UK Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: the Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990 he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement in education, psychology, sociology, economics and health outcomes.

The second strand involves further articulating the implications of Rasch models and developing complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and how to deal with multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model when each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
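
To make the invariance claim concrete, here is a minimal worked sketch using the dichotomous Rasch model; the notation is illustrative rather than taken from Andrich's publications. The model gives the probability that person n answers item i correctly as

\[ P(X_{ni} = 1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}, \]

where \( \beta_n \) is the person's location and \( \delta_i \) the item's difficulty. Comparing two persons on the same item, conditional on exactly one of them succeeding, the item parameter cancels:

\[ P(X_{1i} = 1 \mid X_{1i} + X_{2i} = 1) = \frac{\exp(\beta_1)}{\exp(\beta_1) + \exp(\beta_2)}. \]

The comparison depends only on the two person locations, whichever item is used; the symmetric argument eliminates the person parameter when comparing two items. It is this sufficiency property that licenses treating misfit as an anomaly to be explained, rather than as grounds for choosing a different model.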

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London, he retrained and worked as an educational psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK, he worked as an assessment researcher for 20 years, first with an exam board and then with government agencies. These posts gave him wide experience of assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to develop further his work on the formative role of assessment. As a founder member of the Assessment Reform Group, he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) examined the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity. 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINNOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by the IB, on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the Department for Education, UK, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and has a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than 10 years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which supports Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed to give primary and secondary school students the opportunity to provide input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and his master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing a model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models and develops complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and multidimensionality. He has also recently published a paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a number of tests, for example for selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).
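
The invariance claim can be made concrete with the dichotomous Rasch model, in which the probability that a person of ability θ answers an item of difficulty b correctly is exp(θ - b) / (1 + exp(θ - b)). Because the log-odds of success is simply θ - b, the difference in log-odds between any two persons is the same on every item, whatever its difficulty. The short Python sketch below (with hypothetical ability and difficulty values) illustrates this general principle only; it is not drawn from Professor Andrich’s RUMM software.

    import numpy as np

    def rasch_prob(theta, b):
        # Dichotomous Rasch model: P(X = 1) = exp(theta - b) / (1 + exp(theta - b))
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def log_odds(p):
        return np.log(p / (1.0 - p))

    theta_1, theta_2 = 1.5, 0.5      # hypothetical person abilities
    difficulties = [-1.0, 0.0, 2.0]  # hypothetical item difficulties

    # Invariance of person comparisons: on every item, the log-odds
    # difference between the two persons equals theta_1 - theta_2 (= 1.0),
    # regardless of the item difficulty b.
    for b in difficulties:
        diff = log_odds(rasch_prob(theta_1, b)) - log_odds(rasch_prob(theta_2, b))
        print(f"b = {b:+.1f}: log-odds difference = {diff:.2f}")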

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity. 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate (IB), on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the UK Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission, Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study, funded by the Jacobs Foundation, exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and has a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England she received her Master of Arts in Education, and she then completed her PhD in Psychology in Australia. Samantha’s expertise and interests lie at the intersection of education and psychology, approached through a cognitive-psychology lens. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than ten years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which supports Jamaican students with reading difficulties in attending university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Futures of Education initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that gave primary and secondary school students the opportunity to provide input on the future of technology in their education. As an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that while we cannot always build the future for our students, we can build our students for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement in education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models and develops complementary software to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and how to handle multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika (Online First Publication).
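To make the invariance property concrete, the following is a standard statement of the dichotomous Rasch model (a textbook formulation, not drawn from the profile above). The probability that person n, with location (ability) \(\beta_n\), responds correctly to item i, with location (difficulty) \(\delta_i\), is

\[
\Pr\{X_{ni}=1\} \;=\; \frac{\exp(\beta_n-\delta_i)}{1+\exp(\beta_n-\delta_i)}.
\]

Conditioning on a person’s total score eliminates the person parameter. In the simplest case of two items, given that exactly one of the two responses is correct,

\[
\Pr\{X_{ni}=1 \mid X_{ni}+X_{nj}=1\} \;=\; \frac{\exp(\delta_j-\delta_i)}{1+\exp(\delta_j-\delta_i)},
\]

which does not involve \(\beta_n\): the two items can be compared invariantly with respect to whichever persons happened to answer them and, symmetrically, persons can be compared independently of the particular items attempted. This sufficiency of the total score is the property exploited in the conditional estimation results of the Andrich (2010) paper cited above.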

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity. 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and Fellow at Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020-2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate (IB), on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020-2021). In 2020, she led the research on Critical Thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018-2023). She was the Research Manager of PIRLS 2016, funded by the UK Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016-2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission (Ireland), the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010-2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021-2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014-2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO, the OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that gave primary and secondary school students the opportunity to share their views on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990 he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement in fields including education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models and develops complementary software to better understand a range of anomalies, for example how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model when each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
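
To make the invariance argument concrete, here is a minimal sketch in standard notation (an illustration, not taken from Andrich’s own text). In the dichotomous Rasch model, the probability that person n, with location \theta_n, answers item i, with difficulty \delta_i, correctly is

\[ \Pr\{X_{ni}=1\} \;=\; \frac{\exp(\theta_n-\delta_i)}{1+\exp(\theta_n-\delta_i)}. \]

The person’s total score is a sufficient statistic for \theta_n, so conditioning on it eliminates the person parameter. For example, given that exactly one of items i and j is answered correctly,

\[ \Pr\{X_{ni}=1 \mid X_{ni}+X_{nj}=1\} \;=\; \frac{\exp(-\delta_i)}{\exp(-\delta_i)+\exp(-\delta_j)}, \]

which involves no \theta_n: the comparison of the two items is invariant with respect to which persons responded, and a symmetric argument frees comparisons of persons from the item parameters. This sufficiency property underpins the conditional estimation referred to in the Psychometrika paper above.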

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) examined the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity, 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINNOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the United Arab Emirates, and has carried out policy work for UNESCO, the OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and has a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her PhD in Psychology in Australia. Samantha’s expertise and interests lie at the intersection of education and psychology, approached through a cognitive psychology lens. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than ten years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which supports Jamaican students with reading difficulties in attending university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that gave primary and secondary school students the opportunity to share their views on the future of technology in their education. As an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for our children, but we can build our children for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and his Master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional paradigm in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes. The second strand involves further articulating the implications of Rasch models and developing complementary software to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and how to handle multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model, in the case where each person has sat multiple tests, for example for selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
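
To make the invariance claim concrete, the following is a minimal sketch (an illustration added here, not taken from Andrich’s own text) using the dichotomous Rasch model, with person location \(\beta_n\) and item difficulty \(\delta_i\):

\[
P(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}},
\qquad
\ln\frac{P_{ni}}{1 - P_{ni}} = \beta_n - \delta_i .
\]

Comparing two persons \(n\) and \(m\) on the same item \(i\),

\[
\ln\frac{P_{ni}}{1 - P_{ni}} - \ln\frac{P_{mi}}{1 - P_{mi}} = \beta_n - \beta_m ,
\]

so the item parameter \(\delta_i\) cancels: the comparison of persons does not depend on which item is used, and by symmetry the comparison of items does not depend on which persons respond. Misfit of data to the model therefore signals a breakdown of exactly this invariance, which is the anomaly to be explained qualitatively.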

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity, 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback, in A.L. LIPNEVICH & J.K. SMITH (Eds) The Cambridge Handbook of Instructional Feedback, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • HOPFENBECK, T.N. & STOBART, G. (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in Psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C. & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020–2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate (IB): one on critical thinking in PYP schools internationally and one evaluating education reforms in Kent, UK (2020–2021). In 2020, she led research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018–2023). She was the Research Manager of PIRLS 2016, funded by the Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016–2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010–2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, Chair of Ofqual’s Research Advisory Board in the UK (2021–2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014–2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and has a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her Ph.D. in Psychology in Australia. Samantha’s expertise and interests lie at the intersection of education and psychology, approached through a cognitive psychology lens. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which enables Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and as a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative aimed at giving primary and secondary school students the opportunity to provide input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia and, in 1985, was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has twice been a Visiting Professor at the University of Trento in Italy. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990 he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models and develops complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model when each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).
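A minimal sketch of the invariance property, using the standard dichotomous Rasch model: the probability that person $n$, with location $\beta_n$, succeeds on item $i$, with difficulty $\delta_i$, is

\[
P(X_{ni} = 1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},
\]

so the odds of success are $\exp(\beta_n - \delta_i)$. For two persons $n$ and $m$ attempting the same item, the ratio of their odds is

\[
\frac{P(X_{ni}=1)/P(X_{ni}=0)}{P(X_{mi}=1)/P(X_{mi}=0)} = \exp(\beta_n - \beta_m),
\]

which is free of $\delta_i$: the comparison of two persons is the same whichever item is used and, symmetrically, the comparison of two items does not depend on which persons attempt them. The same structure makes a person’s raw score a sufficient statistic for $\beta_n$, which is the basis of the conditional, item-parameter-free estimation discussed in the Psychometrika paper cited above.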

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London, he retrained and worked as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK, he worked as an assessment researcher for 20 years, first with an exam board and then with government agencies. These posts gave him wide experience of assessment policy and of the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).

Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity. 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback, in A.L. LIPNEVICH & J.K. SMITH (Eds) The Cambridge handbook of instructional feedback, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • HOPFENBECK, T.N. & STOBART, G. (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning, 3, 1-14, Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education: Principles, Policy & Practice, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in Psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy & Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate, on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission (Ireland), the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the United Arab Emirates, and has carried out policy work for UNESCO, the OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

  • Self-regulated learning/metacognition
  • Assessment for Learning/formative assessment
  • International large-scale assessment (PIRLS, PISA)
  • Classroom-based assessment
  • Implementation and evaluation of assessment reforms

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has twice been a Visiting Professor at the University of Trento in Italy. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both national and state levels. In 1990 he was elected a Fellow of the Academy of the Social Sciences in Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMM).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing a model to summarise data is that it fits the data at hand; in the Rasch paradigm, by contrast, the case for the model is that if the data fit it then, within a frame of reference, it provides invariant comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then treated as an anomaly to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models, and develops complementary software, to better understand a range of anomalies: for example, how to identify guessing on multiple-choice items, how to identify and handle response dependence between items, and how to deal with multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model when each person has sat a number of tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
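To make the invariance claim concrete, here is a minimal sketch drawn from standard Rasch theory rather than from Andrich’s own texts. In its dichotomous form, the Rasch model gives the probability that person n, with location \(\theta_n\), answers item i, with difficulty \(b_i\), correctly as

\[
P(X_{ni} = 1) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}.
\]

Comparing two items through the same person’s log-odds,

\[
\ln\frac{P_{ni}}{1 - P_{ni}} - \ln\frac{P_{nj}}{1 - P_{nj}} = (\theta_n - b_i) - (\theta_n - b_j) = b_j - b_i,
\]

the person parameter \(\theta_n\) cancels: the comparison of items does not depend on which persons responded, and, symmetrically, comparisons of persons do not depend on which items were used. This separability is what makes conditional estimation possible, and systematic departures from it in data are the anomalies (guessing, response dependence, multidimensionality) referred to above.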

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) examined the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity, 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment – Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate (IB), on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the UK Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission of Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

  • Self-regulated learning/Metacognition
  • Assessment for Learning/formative assessment
  • International large-scale assessment (PIRLS, PISA)
  • Classroom-based assessment
  • Implementation and evaluation of assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study, funded by the Jacobs Foundation, exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and has a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her PhD in Psychology in Australia. Viewed through a cognitive psychology lens, Samantha’s expertise and interests lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than ten years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which funds Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed to give primary and secondary school students the opportunity to provide input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the Rasch paradigm, the case for the model is that if the data fit the model then, within a frame of reference, it provides invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then treated as an anomaly to be explained qualitatively, by reference to the theory behind the construction of the instrument and to the operational aspects of its application. He argues that this approach improves the quality of social measurement, in fields including education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models, and develops complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and how to deal with multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model when each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika (Online First Publication).
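A minimal sketch of the invariance property described above, using the dichotomous Rasch model (the notation here is illustrative, not quoted from Andrich's own exposition): for person n with location \(\beta_n\) responding to item i with difficulty \(\delta_i\),

\[
\Pr\{X_{ni}=1\} = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}.
\]

Conditioning on two persons producing exactly one correct response between them on the same item eliminates the item parameter:

\[
\Pr\{X_{1i}=1 \mid X_{1i} + X_{2i} = 1\}
= \frac{\exp(\beta_1 - \delta_i)}{\exp(\beta_1 - \delta_i) + \exp(\beta_2 - \delta_i)}
= \frac{\exp(\beta_1)}{\exp(\beta_1) + \exp(\beta_2)}.
\]

The comparison of the two persons is thus independent of which item was used; this is the invariance that data must exhibit in order to fit the model, and the 2010 Psychometrika paper cited above develops the analogous conditioning argument for the polytomous case.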

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London, he retrained and worked as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity. 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • HOPFENBECK, T.N. & STOBART, G. (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.


Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020–2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate, on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020–2021). In 2020, she led the research on Critical Thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018–2023). She was the Research Manager of PIRLS 2016, funded by the Department for Education, UK, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016–2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission, Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010–2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021–2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014–2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative aimed at giving primary and secondary school students the opportunity to share their views on the future of technology in their education. As an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990 he was elected Fellow of the Academy of the Social Sciences in Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand involves further articulating the implications of Rasch models, and developing complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and how to handle multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat multiple tests, for example for selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
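
For readers unfamiliar with the model, a brief sketch may help; the notation below is ours rather than drawn from Andrich’s publications. The dichotomous Rasch model gives the probability that person n answers item i correctly as

P(X_{ni} = 1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},

where \beta_n is the location (proficiency) of the person and \delta_i the location (difficulty) of the item. The invariance referred to above rests on the sufficiency of total scores: conditioning on a person’s total score eliminates the person parameter. For two items and a total score of 1,

P(X_{n1} = 1, X_{n2} = 0 \mid X_{n1} + X_{n2} = 1) = \frac{\exp(\delta_2 - \delta_1)}{1 + \exp(\delta_2 - \delta_1)},

so the comparison of the two items is the same whichever persons respond; the symmetric argument, conditioning on item totals, gives comparisons of persons that are free of the item parameters, which is the sufficiency result exploited in the 2010 paper.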

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity, 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 3, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, Chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and holds a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her Ph.D. in Psychology in Australia. Samantha’s expertise and interests lie at the intersection of education and psychology, approached through a cognitive psychology lens. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than ten years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which supports Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed to give primary and secondary school students the opportunity to contribute their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a Master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia and, in 1985, was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has twice been a Visiting Professor at the University of Trento in Italy. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990 he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, on topics ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models and develops complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
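
For readers less familiar with Rasch measurement, a standard statement of the dichotomous Rasch model makes the invariance property described above concrete (the notation below is the conventional one, offered as an illustrative sketch rather than as a quotation from Andrich’s own work):

\[
\Pr\{X_{ni}=1\} = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},
\]

where \(\beta_n\) is the location (ability) of person \(n\) and \(\delta_i\) the difficulty of item \(i\). Comparing two persons on the same item in log-odds terms,

\[
\ln\frac{\Pr\{X_{1i}=1\}/\Pr\{X_{1i}=0\}}{\Pr\{X_{2i}=1\}/\Pr\{X_{2i}=0\}} = \beta_1 - \beta_2,
\]

so the item parameter \(\delta_i\) cancels: the comparison of persons is invariant with respect to the items used and, symmetrically, comparisons of items are invariant with respect to the persons sampled. Equivalently, a person’s total score \(r_n = \sum_i x_{ni}\) is a sufficient statistic for \(\beta_n\), which is what makes possible the conditional estimation of person parameters in the paper cited above.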

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity, 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback, in A.L. LIPNEVICH & J.K. SMITH (Eds) The Cambridge Handbook of Instructional Feedback, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • HOPFENBECK, T.N. & STOBART, G. (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations, Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning, 3, 1-14, Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report, London: QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.


Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education: Principles, Policy & Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020–2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate (IB): one on critical thinking in PYP schools internationally, and one evaluating education reforms in Kent, UK (2020–2021). In 2020, she led research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018–2023). She was the Research Manager of PIRLS 2016, funded by the UK Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016–2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission Ireland, the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010–2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, Chair of the Ofqual Research Advisory Board in the UK (2021–2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014–2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study, funded by the Jacobs Foundation, exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and holds a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her PhD in Psychology in Australia. Samantha’s expertise and interests, grounded in cognitive psychology, lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has more than 10 years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which enables Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and/or assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed to give primary and secondary school students the opportunity to provide input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that while we cannot always build the future for them, we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of the Social Sciences in Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in the Rasch paradigm, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand further articulates the implications of Rasch models and develops complementary software to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and multidimensionality. He has also recently published a paper showing how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model when each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).
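
To make the invariance claim concrete, the following is a standard textbook sketch of the dichotomous Rasch model (generic notation: the symbols θ_n and b_i are introduced here for illustration and are not drawn from the papers cited above). The probability that person n answers item i correctly depends only on the difference between the person’s location θ_n and the item’s difficulty b_i:

\[
\Pr(X_{ni} = 1) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
\]

Conditioning on the event that exactly one of two items i and j is answered correctly eliminates the person parameter entirely:

\[
\Pr(X_{ni} = 1 \mid X_{ni} + X_{nj} = 1) = \frac{\exp(b_j - b_i)}{1 + \exp(b_j - b_i)}
\]

Because θ_n cancels, the comparison of items i and j does not depend on which persons took the test (and, symmetrically, persons can be compared free of the item parameters). This is the invariance of comparisons referred to above, and a failure of observed data to reproduce it is precisely the kind of anomaly the second strand investigates.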

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London, British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback, in A.L. LIPNEVICH & J.K. SMITH (Eds) The Cambridge Handbook of Instructional Feedback, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer, 1-6.
  • HOPFENBECK, T.N. & STOBART, G. (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations, Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning, 3, 1-14, Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T., State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education: Principles, Policy and Practice, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report, London, QCDA.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA’s Centre for Education Research and Practice and a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment – Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment, and upon making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by the IB: on critical thinking in PYP schools internationally, and on the evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by the Department for Education (UK), and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission (Ireland), the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement, within the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021 – 2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014 – 2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-Calabrese, Dr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study, funded by the Jacobs Foundation, exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a secondary school teacher and has a PGCE and a Master’s in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which enables Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that gave primary and secondary school students the opportunity to provide input on the future of technology in their education. As an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, Samantha seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including by elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasising that while we cannot always build the future for them, we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor’s degree in Mathematics and a master’s degree in Education from The University of Western Australia, and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent six months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch, and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990 he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, in particular Rasch models for measurement, ranging from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in educational, psychological, sociological and statistical journals. He is the author of Rasch Models for Measurement (Sage) and co-author of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement has two strands.

The first involves articulating a research and assessment paradigm that differs from the traditional one in which statistical models are applied. In the traditional paradigm, the case for choosing a model to summarise data is that it fits the data at hand; in contrast, in the paradigm of Rasch models, the case for these models is that if the data fit the model then, within a frame of reference, they provide invariant comparisons of persons with respect to items, and vice versa. Any misfit between the data and the chosen Rasch model is then seen as an anomaly that needs to be explained qualitatively, by reference to the theory behind the construction of the instrument and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and health outcomes.

The second strand involves further articulating the implications of the Rasch models and developing complementary software, to better understand a range of anomalies: for example, how to identify guessing in multiple-choice items, how to identify and handle response dependence between items, and multidimensionality. He has also recently published a paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat multiple tests, for example in selection for university entry: Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model, Psychometrika (Online First Publication).
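For readers unfamiliar with the model, a minimal sketch of the dichotomous case may help make the invariance claim concrete (the notation below is illustrative, not drawn from Andrich’s own papers):

\[
\Pr(X_{ni}=1 \mid \theta_n, \delta_i) \;=\; \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},
\]

where \(\theta_n\) is the location of person \(n\) and \(\delta_i\) the difficulty of item \(i\). Because the total score \(r_n = \sum_i x_{ni}\) is a sufficient statistic for \(\theta_n\), the likelihood conditional on \(r_n\) is free of the person parameters, so item comparisons can be estimated independently of which persons happened to respond, and vice versa. This separability is the formal basis of the invariance of comparisons described above, and of the conditional estimation result cited in the Psychometrika paper.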

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima Peru.
  • July 2019, Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin LondonEd Research Conference for Schools
  • January 2019, Developing effective assessment for learning, Kompatense Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity.43rd IAEA Conference Batumi, Georgia
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London: British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback. The Cambridge handbook of instructional feedback, Eds A.L. Lipnevich and J.K. Smith, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G (2016), Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • T.N. HOPFENBECK & G. STOBART (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations. Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning 3, 1-14; Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner; Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education, 19,1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. AND DAUGHERTY, R.
    (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England Annual Report. London: QCDA.
  • STOBART, G. (2011) Validity in formative assessment, in J.GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.

Michelle is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the wash back of high stakes assessments on teaching and learning. Michelle has published work in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021 Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014 Michelle was Director of AQA’s Centre for Education Research and Practice and was a member of AQA’s Executive Board, responsible for ensuring that AQA’s strategy, products and policies were grounded in a robust research evidence base.

 

Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44 – 59. 

 Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J. Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C., & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and fellow at Kellogg College. She is the elected Vice-President of The Association for Educational Assessment-Europe and Lead Editor of the journal Assessment in Education, Principle, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses upon bridging research on self-regulation and classroom-based assessment and making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by Social Sciences and Humanities Research Council of Canada (2020 – 2021). She is also currently Principal Investigator for two research projects funded by IB, on critical thinking in PYP schools internationally and evaluation of education reforms in Kent, UK (2020 – 2021). In 2020, she led the research on Critical Thinking in the Diploma Program in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018 – 2023). She was the Research Manager of PIRLS 2016, funded by The Department of Education, UK.gov, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016 – 2019). Since coming to Oxford in 2012, she has been the recipient of funding from ESRC-DFID, OECD, The Norwegian Research Council, Education Endowment Foundation, State Examinations Commissions Ireland, Jacob Foundation and the International Baccalaureate totalling more than £2 mill in addition to a single grant of £4 mill in collaboration with SLATE: Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher at the University of Oslo’s research group for Measurement and Evaluation of Student Achievement at the Unit for Quantitative Analysis of Education (2010 – 2011).

She is Adjunct Professor of the Norwegian University of Science and Technology (NTNU), member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual Research Advisory Board in UK (2021 – 2023) and expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and OECD (2014 – 2023). She has advised on the implementation of formative assessment programs in India, South Africa, Norway and the Emirates and carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education Norway.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas

· Self-regulated learning/Metacognition

· Assessment for Learning/formative assessment

· International large-scale assessment (PIRLS, PISA)

· Classroom-based Assessment

· Implementation and evaluation of Assessment reforms

Dr Juliet Scott-Barrett is a Research Officer at the Oxford University Centre for Educational Assessment.

Juliet is currently working with Professor Therese N. Hopfenbeck, Dr Tracey Denton-CalabreseDr Samantha-Kaye Johnston and Dr Joshua McGrane on a research study funded by the Jacobs Foundation on exploring, evaluating and facilitating creativity and curiosity in the classroom. This research is being conducted in collaboration with the Australian Council for Educational Research and the International Baccalaureate.

In her previous post, she was a Project Associate at the Cambridge Centre for Teaching and Learning where she explored inclusive practices in Higher Education, and worked on cycles of Participatory Action Research identifying and addressing barriers to equal and accessible academic opportunities for all.

Juliet completed her doctoral studies at the University of Edinburgh, where she worked with Lego, voice-recorders and photography to explore children’s perspectives on school environments, communication and play. She also  conducted a study interviewing researchers about conducting collaborative and meaningful research with autistic children and young people. She originally trained as a Secondary School teacher and has a PGCE and Masters in Education from the University of Cambridge.

Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).

Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education and then completed her Ph.D. in Psychology in Australia. Using a cognitive psychology lens, Samantha’s expertise and interest lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.

Samantha has 10+ years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie scholarship, which provides a scholarship for Jamaican students with reading difficulties to attend university. As an extension of this project, Samantha founded Reading for Humanity, to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, community organisations, and United Nations bodies.

She has experience as a University Associate at Curtin University and Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. Within this space, she has been teaching and/or assessing various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.

During her time in the ed-tech sector, and in collaboration with UNESCO’s Future of Education Initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that aimed at providing primary and secondary school students with the opportunity to provide their input on the future of technology in their education. As an affiliate at the Berkman Klein Centre for Internet and Society at Harvard University, Samantha’s seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.

Above all, Samantha believes that every child should have the same opportunity to shape their destiny, emphasing that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including, critical thinking and creativity, in all students.

David Andrich is Chapple Professor, Graduate School of Education, The University of Western Australia.

He obtained a bachelor degree in Mathematics and his Masters degree in Education from The University of Western Australia and his PhD from the University of Chicago, for which he was awarded the Susan Colver Rosenberger prize for the best research thesis in the Division of the Social Sciences. He returned to The University of Western Australia, and in 1985 was appointed Professor of Education at Murdoch University, also in Western Australia. In 2007 he returned to The University of Western Australia as the Chapple Professor of Education. In 1977 he spent 6 months as a Research Fellow at the Danish Institute for Educational Research working with Georg Rasch and he has been a Visiting Professor at the University of Trento in Italy for two periods. He has held major research grants from the Australian Research Council continuously since 1985 and has conducted commissioned government research at both the national and state levels. In 1990, he was elected Fellow of the Academy of Social Sciences of Australia for his contributions to measurement in the social sciences. He is especially known for his work in modern test theory, and in particular Rasch models for measurement, ranging in topics from the philosophy of measurement, through model exposition and interpretation, to software development. He has published in Educational, Psychological, Sociological and Statistical journals. He is the author of Rasch Models for Measurement (Sage) and coauthor of the software package Rasch Unidimensional Measurement Models (RUMMLab).

Research

David Andrich’s current research in applying Rasch models for measurement is has two strands.

The first involves articulating a research and assessment paradigm that is different from the traditional in which statistical models are applied. In the traditional paradigm, the case for choosing any model to summarise data is that it fits the data at hand; in contrast, in applying the paradigm of Rasch models, the case for these models is that if the data fit the model, then, within a frame of reference, they provide invariance of comparisons of persons with respect to items, and vice versa. Then any misfit between the data and the chosen Rasch model is seen as an anomaly that needs to be explained by qualitatively by reference to the theory behind the construction of the instrument, and the operational aspects of its application. He argues that this approach improves the quality of social measurement, including in education, psychology, sociology, economics and in health outcomes. The second area of research is further articulating the implications of the Rasch models and development of complementary software, to better understand a range of anomalies, for example, how to identify guessing in multiple choice items, how to identify and handle response dependence between items, and mutldimensionality. He has also recently published the paper which shows how person location estimates can be obtained independently of all test parameters using the general unidimensional Rasch model in the case where each person has sat a multiple of tests, for example for selection for university entry. Andrich, D. (2010) Sufficiency and conditional estimation of person parameters in the polytomous Rasch model. Psychometrika. (Online First Publication).

Professor Stobart is an Honorary Research Fellow at the Oxford University Centre for Educational Assessment (OUCEA) and Emeritus Professor of Education at the Institute of Education, University College London.

He has worked in education as a teacher, psychologist, policy researcher and academic. His expertise is in assessment, with much of his recent work focusing on Assessment for Learning.

After teaching for eight years in secondary schools in Africa and Inner London he retrained, and worked, as an Educational Psychologist. This led to a Fulbright Scholarship in the USA, where he gained a PhD for research into the integration of special needs students in mainstream classrooms.

Returning to the UK he worked as an assessment researcher for 20 years, firstly with an exam board and then with government agencies. These posts led to wide experience with assessment policy and the development of national qualifications and assessments.

His move to the University of London Institute of Education provided the opportunity to further develop his work on the formative role of assessment. As a founder member of the Assessment Reform Group, he has worked for over twenty years on developing Assessment for Learning, an approach which now has international recognition. He continues to work on international projects in this area.

His Testing Times: The uses and abuses of assessment (Routledge, 2008) focused on the impact of assessment, while his current focus is on how expertise develops and the implications for classroom teaching and learning. His book on this is The Expert Learner: Challenging the myth of ability (2014, OUP/McGraw-Hill).
Much of his current professional work involves working with teachers on classroom teaching and learning – and the role formative assessment plays in this.

Recent conference presentations
  • Sept. 2019, Effective Learning – What matters most? Britanico Conference, Lima, Peru.
  • July 2019, ‘Schools matter, but they don’t make a difference’ – examining the genetic claims of Robert Plomin, LondonEd Research Conference for Schools.
  • January 2019, Developing effective assessment for learning, Kompetanse Norge Conference, Norway.
  • Nov. 2018, Teaching, Learning and Assessing for Today and Tomorrow, Aga Khan University, Pakistan.
  • Nov. 2017, Can examinations ever be fair? Investigating equal opportunities, meritocracy and validity. 43rd IAEA Conference, Batumi, Georgia.
  • June 2017, Expert learning and teaching, IEA Assessment Conference, Adelaide.
Selected publications
  • ADIE, L., STOBART, G. & CUMMING, J. (in press) The construction of the teacher as expert assessor, Asia-Pacific Journal of Teacher Education.
  • BOYD, E., GREEN, A., HOPFENBECK, T. & STOBART, G. (2019) Effective Feedback: The key to successful Assessment for Learning, Oxford University Press.
  • GOLDSTEIN, H., MOSS, G., SAMMONS, P., SINNOTT, G. & STOBART, G. (2018) A baseline without basis: The validity and utility of the proposed reception baseline assessment in England, London, British Educational Research Association.
  • STOBART, G. (2018) Becoming proficient: An alternative perspective on the role of feedback, in A.L. LIPNEVICH & J.K. SMITH (Eds) The Cambridge Handbook of Instructional Feedback, Cambridge, Cambridge University Press, 29-51.
  • BAIRD, J., ANDRICH, D., HOPFENBECK, T.N. & STOBART, G. (2017) Assessment and Learning: fields apart? Assessment in Education: Principles, Policy and Practice, 24, 1, 317-350.
  • STOBART, G. (2016) Assessment and Learner Identity, Encyclopedia of Educational Philosophy and Theory, Dordrecht, Springer Press, 1-6.
  • HOPFENBECK, T.N. & STOBART, G. (Eds) (2015) Assessment for Learning: Lessons learned from large-scale evaluations of implementations, Special Issue of Assessment in Education: Principles, Policy and Practice, 22, 1, 1-177.
  • STOBART, G. (2014) What is 21st Century Learning – and what part does classroom assessment play? Assessment and Learning, 3, 1-14, Hong Kong Education Bureau.
  • STOBART, G. (2014) The Expert Learner: Challenging the myth of ability, Maidenhead, McGraw-Hill/Open University Press.
  • EGGEN, T.J.H.M. & STOBART, G. (Eds) (2014) High Stakes Testing in Education: Value, fairness and consequences, London, Routledge.
  • STOBART, G. & HOPFENBECK, T.N. (2014) Assessment for Learning and formative assessment, in BAIRD, J-A., HOPFENBECK, T., NEWTON, P., STOBART, G. & STEEN-UTHEIM, A.T. State of the Field Review of Assessment and Learning, Norwegian Knowledge Centre for Education study 13/4697.
  • STOBART, G. & EGGEN, T. (2012) High-stakes testing – value, fairness and consequences, Assessment in Education: Principles, Policy and Practice, 19, 1, 1-6.
  • STOBART, G. (2012) Validity in formative assessment, in J. GARDNER (Ed.) Assessment and Learning, 2nd Edition, London, Sage.
  • BAIRD, J., ISAACS, T., JOHNSON, S., STOBART, G., YU, G., SPRAGUE, T. & DAUGHERTY, R. (2011) Policy Effects of PISA.
  • BAIRD, J., ELWOOD, J., DUFFY, G., FEILER, A., O’BOYLE, A., ROSE, J. & STOBART, G. (2011) 14–19 Centre Research Study: Educational Reforms in Schools and Colleges in England, Annual Report, London, QCDA.

Michelle Meadows is Associate Professor in Educational Assessment and Course Director for the MSc in Educational Assessment.

Michelle has a PhD in psychology from the University of Manchester. Her research interests span most elements of high-stakes assessments and qualifications. For example, she has conducted research into marking reliability, marker recruitment and retention, the maintenance of standards both between and within qualifications, construct validity, malpractice, and the washback of high-stakes assessments on teaching and learning. Michelle has published in academic books and journals, and has presented at national and international conferences.

Michelle is also a member of the Assessment Committee of the Institute of Directors. The committee is responsible for quality assuring the Institute’s qualifications in business leadership.

Prior to September 2021, Michelle was Deputy Chief Regulator and Executive Director for Strategy, Risk and Research at Ofqual. She was responsible for research to support the development of high-stakes assessment; the successful delivery of reliable and valid public examinations; qualification design to stimulate high-quality teaching and learning; and the regulation of the maintenance of examination standards.

Before May 2014, Michelle was Director of AQA's Centre for Education Research and Practice and a member of AQA's Executive Board, responsible for ensuring that AQA's strategy, products and policies were grounded in a robust research evidence base.


Publications

Journal articles

He, Q., Meadows, M.L. & Black, B. (2020) An introduction to statistical techniques used for detecting anomaly in test results, Research Papers in Education, DOI: 10.1080/02671522.2020.1812108

Cuff, B.M., Meadows, M.L. & Black, B. (2019) An investigation into the Sawtooth Effect in secondary school assessments in England, Assessment in Education: Principles, Policy & Practice, 26, 3, 321-339.

Pinot de Moira, A., Meadows, M.L. & Baird, J-A. (2019) The SES equity gap and the reform from modular to linear GCSE mathematics, British Educational Research Journal, https://doi.org/10.1002/berj.3585.

He, Q., Stockford, I. & Meadows, M.L. (2018) Inter-subject comparability of examination standards in GCSE and GCE in England, Oxford Review of Education, 44, 4, 494-513.

Holmes, S.D., Meadows, M.L., Stockford, I. & He, Q. (2018) Investigating the Comparability of Examination Difficulty Using Comparative Judgement and Rasch Modelling, International Journal of Testing, 18, 4, 366-391.

Meadows, M.L. & Black, B. (2018) Teachers’ experience of and attitudes toward activities to maximise qualification results in England, Oxford Review of Education, 44, 5, 563-580.

Baird, J., Meadows, M.L., Leckie, G. & Caro, D. (2017) Rater accuracy and training group effects in Expert- and Supervisor-based monitoring systems, Assessment in Education: Principles, Policy & Practice, 24, 1, 44-59.

Holmes, S.D., He, Q. & Meadows, M.L. (2017) An investigation of construct relevant and irrelevant features of mathematics problem-solving questions using comparative judgement and Kelly’s Repertory Grid, Research in Mathematics Education, 19, 2, 112-129.

Editorials

Newton, P. & Meadows, M.L. (2011) Marking quality within test and examination systems, Assessment in Education: Principles, Policy and Practice, 18, 213-216.

Published reports

Baird, J., Caro, D., Elliott, V., El Masri, Y., Ingram, J., Isaacs, T., Pinot de Moira, A., Randhawa, A., Stobart, G., Meadows, M., Morin, C. & Taylor, R. (2019) Examination Reform: the impact of linear and modular examinations at GCSE, Joint OUCEA and Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Findings from our call for evidence, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Teacher involvement in developing exam papers: Student focus groups and parent interviews, Ofqual Report.

He, Q., Meadows, M., & Black, B. (2018) Statistical techniques for studying anomaly in test results: A review of the literature, Ofqual Report.

Therese N. Hopfenbeck is Professor of Educational Assessment, Director of the Oxford University Centre for Educational Assessment and a Fellow of Kellogg College. She is the elected Vice-President of the Association for Educational Assessment – Europe and Lead Editor of the journal Assessment in Education: Principles, Policy and Practice.

Dr Hopfenbeck’s research agenda focuses on bridging research on self-regulation and classroom-based assessment, and on making sense of international large-scale studies in education. In collaboration with Professor Nancy Perry, University of British Columbia, she is currently leading an international network of researchers disseminating classroom-based research, funded by the Social Sciences and Humanities Research Council of Canada (2020–2021). She is also currently Principal Investigator for two research projects funded by the International Baccalaureate, on critical thinking in PYP schools internationally and on the evaluation of education reforms in Kent, UK (2020–2021). In 2020 she led research on critical thinking in the Diploma Programme in Australia, England and Norway (https://ibo.org/research/outcomes-research/diploma-studies/critical-thinking-skills-of-dp-students/). Dr Hopfenbeck is also Principal Investigator for the PISA 2022 study in England, Northern Ireland and Wales, in collaboration with Pearson UK (2018–2023). She was the Research Manager of PIRLS 2016, funded by the UK Department for Education, and was Principal Investigator of a major ESRC-DFID research study, Assessment for Learning in Africa (ES/N010515/1) (2016–2019). Since coming to Oxford in 2012, she has received funding from ESRC-DFID, the OECD, the Norwegian Research Council, the Education Endowment Foundation, the State Examinations Commission (Ireland), the Jacobs Foundation and the International Baccalaureate, totalling more than £2 million, in addition to a single grant of £4 million in collaboration with SLATE, the Centre for the Science of Learning & Technology at the University of Bergen, Norway. Prior to her appointment at Oxford, she worked as a post-doctoral researcher in the University of Oslo’s research group for Measurement and Evaluation of Student Achievement, in the Unit for Quantitative Analysis of Education (2010–2011).

She is Adjunct Professor at the Norwegian University of Science and Technology (NTNU), a member of the Visiting Panel for Research at the Educational Testing Service (ETS) in Princeton, chair of Ofqual’s Research Advisory Board in the UK (2021–2023) and an expert member of the PISA 2022 Questionnaire Framework group, appointed by ETS and the OECD (2014–2023). She has advised on the implementation of formative assessment programmes in India, South Africa, Norway and the Emirates, and has carried out policy work for UNESCO/OECD and the Norwegian Ministry of Education.

Therese has a presence on LinkedIn, ResearchGate, Academia.edu and Twitter: @TNHopfenbeck.

She welcomes students in the following areas:
  • Self-regulated learning/metacognition
  • Assessment for Learning/formative assessment
  • International large-scale assessment (PIRLS, PISA)
  • Classroom-based assessment
  • Implementation and evaluation of assessment reforms
