Scalable, Effective Assessment through Computer-based Testing

  Assessment is an important part of education: it provides students feedback on the extent of their learning and, for some students, is their primary motivation to engage with a course. Assessment, however, is one area that has historically been difficult to scale to large-enrollment classes. In particular, running exams in large classes presents challenges: handling student conflicts, printing exams, proctoring (often across many rooms), and the time and effort of grading, which delays feedback to students. As a result, many large courses have historically run as few exams as possible and relied heavily on multiple-choice exams, which are limited in the kinds of skills they can assess.

With our Computer-based Testing Facility (CBTF), we're trying to revolutionize how exams are run in large STEM courses, making them better for students, faculty, and course staff alike. Four concepts are key to achieving this goal. First, by running exams on computers, we can write complex, authentic (e.g., numeric, programming, graphical, design) questions that are auto-gradable, allowing us to test a broad set of learning objectives with minimal grading time while providing students immediate feedback. Second, we write question generators that use randomness to produce a collection of problems, allowing us to give each student different questions and to reuse the generators semester after semester. Third, because each student has a unique exam, we let students schedule their exams at a time convenient to them within a specified range of days, giving them flexibility and avoiding the need to manage conflict exams. Finally, because the CBTF completely handles exam scheduling and proctoring, once faculty have their exam content, it takes no extra effort to run more frequent, smaller exams, which reduce student anxiety, or to offer second-chance exams, which reduce failure rates by giving struggling students an opportunity to review and demonstrate mastery of concepts they missed on an exam.
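To make the second concept concrete, here is a minimal sketch of what a seeded question generator with a numeric auto-grader might look like. The question content, function names, and grading tolerance are hypothetical illustrations, not the CBTF's actual implementation.

```python
import random

def generate_question(seed: int) -> dict:
    """Generate one variant of a simple series-circuit question.

    A minimal, hypothetical sketch: a real generator would also produce
    rendered markup and richer grading metadata; here we show only the
    core idea of seeded randomization plus an auto-gradable answer.
    """
    rng = random.Random(seed)                  # seeded, so variants are reproducible
    r1 = rng.choice([100, 220, 470, 1000])     # resistor values in ohms
    r2 = rng.choice([100, 220, 470, 1000])
    v = rng.randint(5, 24)                     # supply voltage in volts
    prompt = (f"Two resistors of {r1} ohms and {r2} ohms are connected in "
              f"series across a {v} V supply. What current flows, in amps?")
    return {"prompt": prompt, "answer": v / (r1 + r2), "rel_tol": 0.01}

def grade(question: dict, submitted: float) -> bool:
    """Auto-grade a numeric submission against the generated answer."""
    tol = abs(question["answer"]) * question["rel_tol"]
    return abs(submitted - question["answer"]) <= tol

# Each student gets a different, but reproducible, variant:
for student_id in (1, 2, 3):
    print(generate_question(seed=student_id)["prompt"])
```

Seeding by student (and attempt) keeps each variant reproducible for regrades and appeals while still giving different students different questions.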

The Vision for and Operation of a CBTF
  The idea of a Computer-based Testing Facility as the primary solution for summative assessment in large-enrollment college courses is novel. As such, we've been documenting our experiences and best practices in an effort to convince others to consider adopting the practice and to aid them in doing so.

Making Testing Less Trying: Lessons Learned from Operating a Computer-Based Testing Facility
Craig Zilles, Matthew West, David Mussulman, and Tim Bretl (FIE 2018)

Student and Instructor Experiences with a Computer-Based Testing Facility
Craig Zilles, Matthew West, David Mussulman, and Carleen Sacris (EDULEARN 2018)

Computerized Testing: A Vision and Initial Experiences
Craig Zilles, Robert Deloatch, Jacob Bailey, Bhuwan Khattar, Wade Fagen, Cinda Heeren, David Mussulman, and Matthew West (ASEE 2015)


Learning Analytics
  The CBTF is a phenomenal source of data for understanding student behavior and learning. We've been using statistical and machine-learning tools to make sense of the enormous streams of data it produces.
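As one flavor of question we ask of this data, consider how often two independently generated exams share questions. The back-of-the-envelope sketch below assumes each question slot draws uniformly from its own pool of variants; it is an illustrative simplification, not the analysis in the papers below.

```python
from math import prod

def expected_overlap(pool_sizes: list[int]) -> float:
    """Expected number of questions two independent exams share,
    assuming each slot draws uniformly from its own pool."""
    return sum(1 / n for n in pool_sizes)

def prob_any_overlap(pool_sizes: list[int]) -> float:
    """Probability that two exams share at least one question,
    under the same uniform-draw assumption."""
    return 1 - prod(1 - 1 / n for n in pool_sizes)

# An exam with five slots, each drawing from a pool of four variants:
pools = [4, 4, 4, 4, 4]
print(f"expected shared questions: {expected_overlap(pools):.2f}")  # 1.25
print(f"P(at least one shared):    {prob_any_overlap(pools):.2f}")  # 0.76
```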

How Much Randomization is Needed to Deter Collaborative Cheating on Asynchronous Exams?
Binglin Chen, Matthew West, and Craig Zilles (Learning@Scale 2018)

Do Performance Trends Suggest Wide-spread Collaborative Cheating on Asynchronous Exams?
Binglin Chen, Matthew West, and Craig Zilles (Learning@Scale 2017)


Student Scheduling Preference Modeling
  Through our work, we've observed that students don't sign up for exam time slots uniformly. We've documented their behavior and built models of it that we can use for capacity planning when assigning exams to day ranges.
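As an illustration of how such a model can feed capacity planning, the sketch below simulates per-day seat demand for an exam window under a hypothetical, non-uniform preference distribution. The preference weights, class size, and demand quantile are assumptions for illustration, not our fitted model.

```python
import numpy as np

def seats_needed(n_students: int, day_weights: list[float],
                 quantile: float = 0.95, trials: int = 10_000,
                 seed: int = 0) -> np.ndarray:
    """Estimate per-day seat demand for an exam window by simulation.

    day_weights is a hypothetical preference distribution over the days
    of the window; each day's capacity is sized to cover the given
    demand quantile across simulated sign-ups.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(day_weights, dtype=float)
    p /= p.sum()
    # Each trial: every student independently picks a day with probability p.
    counts = rng.multinomial(n_students, p, size=trials)
    return np.ceil(np.quantile(counts, quantile, axis=0)).astype(int)

# Illustrative 3-day window where demand skews toward the last day:
print(seats_needed(600, day_weights=[1, 2, 4]))  # seats to reserve per day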

Student Behavior in Selecting an Exam Time in a Computer-Based Testing Facility
Craig Zilles, Matthew West, and David Mussulman (ASEE 2016)

Modeling Student Scheduling Preferences in a Computer-Based Testing Facility
Matthew West and Craig Zilles (Learning@Scale 2016)