By Cameron Miranda-Radbord and Maia De Caro
How does student feedback translate into change at Student Life? Gathering feedback is a crucial component of the assessment process: it helps identify gaps in operations and programming and enables targeted improvements. Feedback can be collected through various methods, such as surveys, focus groups, reflections, and conversations. In our focus on assessment, we examined the Signature Program Assessments (SPAs), a unit-level assessment mechanism designed to help programs identify gaps and implement improvements. Once feedback is collected, it can be analyzed and applied in multiple ways to reveal what is and isn’t working. Feedback is also integral to the mission and goals of Student Life, as it guides units in making progress toward the objectives outlined in the Strategic Plan, which emphasizes student success and development.
To explore this question, Maia focused on Starting Point, an introduction to the University for first-year students, while Cameron studied Accessibility Services, which supports students with disabilities. As we examined these two units, we learned that they gathered, interpreted, and used student feedback in different ways, especially when deciding how to improve their programs. Accessibility Services explicitly used survey responses to develop new programs aligned with what students said they wanted. Starting Point, in contrast, used feedback to understand what kind of changes were needed: operational rather than strategic ones.
At Accessibility Services, the primary means of evaluation was a yearly student survey. Based on an interview with administration, the most influential part of the survey appeared to be a section offering students a list of programming suggestions, including “More instruction-based programming” and “More informal connections with other students registered”. “More informal connections…” received the most interest from students. Consequently, according to administrators at Accessibility Services, the bulk of the changes arising from the report focused on increased peer support, including “more peer advisors, more peer mentors and peer facilitators” and “Talking To New People, a program for students to practice relationship-building tools and strategies to make connections.” Other sections, which asked students to evaluate the quality of Accessibility Services’ offerings, yielded positive results: students were content with communication from Accessibility Services and felt supported. We infer this is why administrators identified fewer changes stemming from those sections.
At Starting Point, administration used enrollment data (completion of each of the program’s “stages” and completion of the program as a whole) to assess the program’s quality. Students were required to complete written reflections before and after starting the program, but these were of limited use to administrators given their small sample size: only about 30 of the 800 students who used Starting Point completed the reflections, and each response was effusive, leading a staff member to note likely selection bias. Indeed, the enrollment numbers told a vastly different story: very few students who enrolled completed the full program. Based on the completion data, administration restructured the program into a tiered system in which students received a CCR credit for completing each stage (bronze, silver, gold). The actual material and promotion of the program, however, did not change. Following the restructuring, Starting Point’s assessment tools yielded much better numbers (more students completed each tier) and rave reviews from the students who completed reflections. However, while staff were aware that the Strategic Plan’s goals existed and broadly centred on student growth and development, the assessment strategy and restructuring did not explicitly take into account the Strategic Plan goals the program identified in its SPA.