Computer Science Theory studies various models of computation such as finite automata, regular expressions, context-free grammars, and push-down automata. The goal is to understand how much computational power is needed to solve various types of problems, or to prove mathematically that a problem is unsolvable with the given resources. As a first step toward this understanding, students construct automata or grammars for specific problems. Because of its abstract nature, computing theory is often perceived as challenging by students, and verifying the correctness of a student's automaton or grammar is time-consuming for instructors. Aiming to benefit both students and instructors, we designed an automated feedback tool for assignments in which students construct automata or grammars. Our tool, built as an extension to the widely popular JFLAP software, determines whether a submission is correct and, for incorrect submissions, provides a "witness" string demonstrating the incorrectness (that is, the extension provides an input on which the submission fails).

Our research was guided by three research questions:

RQ1. What is the effect of using the extension on students' perceptions of their learning and on their behavior when solving homework questions?
RQ2. What is the effect of using the extension on students' performance when solving the homework questions?
RQ3. How do instructors benefit from the extension?

We studied the usage and benefits of our tool in two terms, Fall 2019 and Spring 2021. Each term, students in one section of the Introduction to Computer Science Theory course were required to use our tool for sample homework questions targeting DFAs, NFAs, RegExs, CFGs, and PDAs. In Fall 2019, this was a regular section of the course; we also collected comparison data from another section that did not use our tool but had the same instructor and homework assignments. In Spring 2021, a smaller honors section provided the perspective of this demographic.
Overall, students who used the tool reported that it helped them not only to solve the homework questions (and they performed better than the comparison group) but also to better understand the underlying theory concepts. Students were engaged with the tool: almost all persisted with their attempts until their submission was correct, despite not being able to random-walk to a solution. This indicates that witness feedback, a succinct explanation of incorrectness, is effective. The tool also assisted instructors with assignment grading. Our main broader impact is the tool itself, which benefits both students and instructors and is available for use at other institutions.
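To give a concrete sense of witness feedback for the DFA case: when a reference DFA is available, a shortest witness string can be found by breadth-first search over the product of the reference and submitted automata, stopping at the first state pair where exactly one automaton accepts. The sketch below is illustrative only, since this abstract does not describe the extension's internals; all names and the DFA encoding are our own assumptions.

```python
from collections import deque

def dfa_witness(dfa_a, dfa_b, alphabet):
    """Return a shortest string accepted by exactly one of the two DFAs
    (a 'witness' of inequivalence), or None if they are equivalent.

    Each DFA is a triple (start_state, transitions, accepting), where
    transitions maps (state, symbol) -> state.  Illustrative sketch;
    not the extension's actual implementation.
    """
    start = (dfa_a[0], dfa_b[0])
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        (qa, qb), word = queue.popleft()
        # A witness is found when exactly one automaton accepts here.
        if (qa in dfa_a[2]) != (qb in dfa_b[2]):
            return word
        for sym in alphabet:
            nxt = (dfa_a[1][(qa, sym)], dfa_b[1][(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return None  # product automaton exhausted: the DFAs are equivalent
```

For example, comparing a reference DFA for "even number of a's" against an (incorrect) submission that accepts every string yields the witness "a", which the reference rejects and the submission accepts.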
Ivona Bezáková, Rochester Institute of Technology, Rochester, NY; Kimberly Fluet, University of Rochester, Rochester, NY; Edith Hemaspaandra, Rochester Institute of Technology, Rochester, NY; Hannah Miller, Rochester Institute of Technology, Rochester, NY; David E. Narváez, University of Rochester, Rochester, NY