By Maia de Caro
My project is rooted in trying to understand assessment practices in the division of Student Life. To narrow my focus, I looked into the Signature Program Assessments (SPAs), a unit-level assessment tool connected to actionable goals outlined in the Strategic Plan. Since there were no SPAs running at the time I conducted my research, I decided to look into Starting Point, a program intended to help first-year students become familiar with the university and one of the units that had completed a SPA in the previous cycle.
Surveys are one of the most common tools used in assessment practices because they are relatively easy to administer and can collect large volumes of data for analysis. Depending on their design, they can incorporate both quantitative and qualitative data, offering a mix of numerical feedback, such as participation rates, and written responses that provide deeper insight into individual experiences. They serve various purposes, such as collecting feedback at the end of a program to understand how participants felt about their experience, or at the beginning, during registration, to explore why individuals chose to participate. They can also be formatted in different ways, from open-ended questions requiring written responses, where students describe their personal perspectives, to scaled questions, where participants rate their experiences on a scale from 1 to 10 or from “very bad” to “very good.” Their primary purpose is to gather data that provides a broad understanding of an experience from a specific group of people who have used or participated in something.
However, despite their popularity, surveys have drawbacks, especially regarding neutrality. A major issue noted by my interlocutors is that surveys can be time-consuming, which discourages people from filling them out. Moreover, there’s a paradox surrounding surveys: while they are considered one of the best tools for gathering data, they often yield very opinionated results, as those who do complete them tend to hold strong views—either very positive or very negative—about the program. This creates a skewed dataset, as the neutral voices are often absent, and that lack of neutrality can make it difficult to reach balanced, well-informed decisions about future program improvements.
On the flip side, surveys can also suffer from an excess of neutrality, depending on how they are structured. For instance, in scaled surveys where participants are asked to rate their experience on a scale like “very bad,” “bad,” “neutral,” “good,” or “very good,” many people choose the neutral option. This can lead to inconclusive results, as it doesn’t provide much insight into how participants actually feel. One of my interlocutors noted that, when implementing surveys, they try to minimize this by using a scale that only includes “very bad,” “bad,” “good,” and “very good,” forcing participants to take a side and helping assessors better understand the general sentiment.
Ultimately, the paradox of neutrality in surveys—sometimes too much, other times too little—complicates their use in assessment practices. While surveys have the advantage of being easily digestible and offering a broad dataset, they often need to be supplemented with other forms of information to provide a fuller, more accurate picture.