Need: Student learning outcomes are known to be associated with student engagement. At large scale, instructors find it increasingly difficult to gain insight into how and whether students are engaged in the class. Consequently, students are expected to master fundamental concepts outside the classroom, where faculty cannot observe students learning or assess how to intervene productively on their behalf.

Guiding Question: The general hypothesis of our work is that student annotation and discussion in the margins of text can both directly increase student engagement and provide valuable feedback that helps faculty revise their course and content to increase engagement. In particular, we build upon NotaBene (NB), a system that allows students to discuss online course content (PDFs, websites, YouTube videos) in the margins of those content sources.

Outcomes: In our work we develop a new approach to confusion detection in online forums that harnesses students' self-reported affective states (reported using a set of pre-defined hashtags). We trained a machine learning (ML) model to detect whether students' posts exhibit confusion in the absence of hashtags. In another study, we show that a combination of automatic classification and visualization of cognitive engagement anchored in the text can give teachers, and not only researchers, valuable insight into their students' thinking, suggesting ways to modify their lectures and course readings to improve learning. Very recent work tests this idea by using student-reported affect (as reported in forum posts in NB) as a data source for guiding revisions to the textbook, and demonstrates experimentally that students receiving the revised version of the text show lower levels of confusion and express more curiosity than those reading the control (original) text.
We have also developed, through multiple design iterations, an improved set of emoji-hashtag pairs that serve as a fixed symbolic vocabulary for affect reporting. Finally, we developed a supervised ML model for selecting high-quality forum posts from a previous course instance to serve as discussion seeds in subsequent course iterations. We found that students who received posts selected by the seeding model in their reading assignments generated more discussion than a control group that did not receive seeded posts. Furthermore, students who received seeds selected by the ML-based model showed higher levels of engagement, as well as greater learning gains, than those who received seeds ranked by length of discussion.

Broader Impacts: The core impact of our innovations is to give students and instructors greater insight into how, not just what, students are learning from online content. Ultimately, the ability to analyze and learn from students' discussion and learning behaviors in online learning environments will help create interventions (such as automated forum seeding and data-driven course content revision) that improve the quality of teaching and learning.
Michele Igo, UC Davis; Hyunsoo Kim, UC Davis; David Karger, MIT; Jumana Almahmoud, MIT; Kobi Gal, Ben-Gurion University; Shay Geller, Ben-Gurion University; Einat Shusterman, Ben-Gurion University; Ariel Blobstein, Ben-Gurion University