Panel: Computer-Based Testing

View on Snap!Con

Presented By: Gurkaran Singh Goindi, Irene Ortega, Dan Garcia, Maxson Yang, Shein Lin Phyo, Eduardo Huerta, Benjamin Belfus, Qitian Liao, Shannon Hearn, Alyssa Sugarman, Jonas Ong, Bojin Yao


Abstract:

Formative assessment refers to in-process evaluations of student comprehension, learning needs, and academic progress, such as lab work, in-class worksheets, and homework assignments. However, turnaround time for homework feedback can be long depending on instructor availability, and lab time is limited, especially given the rising demand for computing/STEM education. Summative assessment refers to a point-in-time measurement of what a student has mastered, usually through proctored quizzes and exams. Their administration times are generally fixed, which presents challenges for students with family or financial obligations. Because exams are complex to prepare and administer, instructors usually create few of them, so a small number of summative assessments often determines a large part of a student’s final grade.
In STEM higher education, courses conduct both formative and summative assessments in a manner that thwarts mastery learning and magnifies equity gaps in student preparation. In short, this is “constant time, variable learning”: course pacing is the same for all students regardless of learning speed, all students receive a small number of “one-shot” summative assessments at the same time, and not all of them will master the material (or even pass).
In contrast, mastery learning is “constant learning over variable time”: some students may take longer than others to reach the same level of mastery, but with enough practice and instructor support they can eventually get there. The challenge in implementing mastery learning is that increased practice in STEM courses means solving more practice problems, and developing good practice problems requires substantial instructor effort, to say nothing of giving students timely feedback on their performance on those problems.

To address these challenges, Dan Garcia’s lab at UC Berkeley is developing paradigm-based question generators (PQGs) to enable both formative-assessment and summative-assessment mastery learning for “The Beauty and Joy of Computing.” Students will have as much practice and as much time with Snap! concepts as they need to achieve mastery, rather than following a rigid schedule that may leave mastery uneven. Our hypothesis for this project is that PQGs will result in higher retention, stronger learning outcomes, greater participation in computing among underrepresented and minority students, and more effective use of instructor time to identify and assist struggling students.
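The abstract does not describe the internals of a PQG, but the core idea of a parameterized question generator with instant feedback can be sketched as follows. Everything here is illustrative: the function names, the question template, and the grading logic are assumptions for the sake of the example, not the lab’s actual implementation.

```python
import random

def generate_loop_question(rng):
    """Generate one 'trace the counter' practice question plus its answer.

    The question is parameterized by random values, so each attempt
    yields a fresh but structurally identical problem. (Hypothetical
    template, not an actual PQG question.)
    """
    start = rng.randint(1, 5)   # initial counter value
    step = rng.randint(2, 4)    # increment per iteration
    count = rng.randint(3, 5)   # number of iterations
    prompt = (
        f"A counter starts at {start} and increases by {step}, "
        f"{count} times. What is its final value?"
    )
    answer = start + step * count
    return prompt, answer

def grade(answer, response):
    """Return immediate formative feedback for a student response."""
    if response == answer:
        return "Correct!"
    return f"Not quite: the answer is {answer}. Try another variant."

# Seeding makes each generated variant reproducible, e.g. for regrades.
rng = random.Random(42)
prompt, answer = generate_loop_question(rng)
```

Because the generator, answer key, and feedback are all computed from the same parameters, students can request unlimited variants and get instant feedback, which is the kind of low-cost practice loop mastery learning depends on.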