


When humans score responses to open-ended test items, the process tends to be slow and expensive, and the subjective nature of human judgment can introduce measurement error.

Automated scoring is an application of computer technologies developed to address these challenges by predicting human scores based on item structure algorithms or response features. Unfortunately, there is no guarantee that a particular attempt to develop an automated scoring model will succeed, because features of the assessment design may reduce the automated “scorability” of examinee responses to test items. Our presentation begins with an overview of automated scoring and scorability. We then describe two applications of automated scoring: Pearson’s Intelligent Essay Assessor (IEA) and Math Reasoning Engine (MRE). We continue by illustrating the concept of automated scorability, identifying features of prompts and scoring rubrics that we have found to either improve or reduce the chances of being able to model human scores on test items. Finally, we provide guidelines to help item developers and rubric writers facilitate the automated scorability of examinee responses.

Speakers

Lisa Eggers-Robertson is responsible for product planning and execution of the automated math open-ended response scoring service. She has served as a program manager supporting the College Board’s Accuplacer and SAT programs and other state programs, as well as a product manager for an interim classroom assessment product. She has spent the last nine years of her career on the automated scoring team. She is certified as a Project Management Professional (PMP) by the Project Management Institute (PMI) and holds a BBA in Management Information Systems from the University of Iowa.

Gregory M. Jacobs serves as technical lead and senior member of Pearson’s automated scoring team, which is responsible for product planning and execution of the Intelligent Essay Assessor™ scoring service. He works with scoring staff and assessment program teams to provide leadership and execution of the automated scoring process, and he has over three years of industry experience building machine learning models, with an emphasis on natural language processing. He also holds a Juris Doctor from Catholic University and has over a decade of legal experience as a commercial litigator handling complex contractual language disputes.

Edward W. Wolfe is responsible for development and delivery of Pearson’s automated writing and math scoring services, including the Intelligent Essay Assessor™, Continuous Flow, and the Math Reasoning Engine. He works with scoring staff and assessment program teams to provide leadership and oversight of the automated scoring process on all programs. Dr. Wolfe holds a PhD in educational psychology from the University of California, Berkeley, and has authored nearly 100 peer-reviewed journal articles and book chapters on human and automated scoring and applied psychometrics.

Ethical dilemmas confront all educational researchers, and the literature is replete with examples and with administrative ways of processing them. This seminar focuses upon real-life and inclusive examples of dilemmas, looking at how they might be resolved. It draws upon different perspectives, both methodological and theoretical. Presentations are based upon a recent book, Ethical Dilemmas in Education: Considering Learning Contexts in Practice, with an emphasis upon:

  • Researcher reflexivity, wellbeing and ethical safety
  • Starting with self
  • Jolts in the margins: probing the ethical dimensions of post-paradigms in educational research
  • The hidden lives of ethics

Following brief presentations on each of the above topics, there will be a discussion of ethical dilemmas encountered by the researchers present, both presenters and audience members.