Components of Assessments and Grading at Scale

Abstract: One of the major criticisms of efforts toward offering education at scale has been the Trap of Routine Assessment: the risk that student assessment will be excessively simplified in service of automation and scale. In this research, we examine the ways that students in an at-scale graduate program in computer science were assessed during their degrees. The program in question has scaled to over 10,000 students in only a few years, yet awards a traditional Master’s degree, providing the opportunity to investigate whether scale was achieved by transitioning to more routine assessment or by bringing scale to traditional strategies. To do this, we investigate the syllabi of 52 classes offered through the program to identify the types of assessments used, and we survey teaching teams about their approaches to evaluating these assessments. We merge these data with historical enrollment data to produce an overall summary of the kinds of assessments and evaluations students receive during their degrees. We ultimately find that the program’s scale has been managed by scaling up traditional assessment and evaluation strategies: the majority of grades are generated by human teaching teams evaluating projects and homework assignments, with a relatively smaller portion generated exclusively by automated evaluation of exams.

Full Paper

The full paper “Components of Assessments and Grading at Scale” can be found here.