April 10, 2019

Incentives lead to higher eval participation

Information provided by Andy Mehl. Graphic by Michelle Dudley.

Since the introduction of an early grade release incentive in Fall Term for students who complete their course evaluations, participation has risen by more than 30 percentage points in both the fall and winter.

Numbers provided by Professor of Chemistry Andy Mehl, who served on the Faculty Personnel Committee when the incentive was put in place, showed a response rate of 85.5 percent in the fall and 78.4 percent in the winter of this academic year. Over the previous three years, the average response rate had been 52.8 percent in the fall and 45.8 percent in the winter.

“We always like them to be higher, right? It’s more meaningful when we have a better response rate for the instructors, the administration, whoever’s looking at the results,” Mehl said.

Mehl explained that the response rate had typically hovered near 70 percent when evaluations were completed with paper and pencil. In recent years, since the process moved online, it has fallen to around 50 percent.

Theatre Professor Liz Carlin-Metz, the current chair of the Faculty Personnel Committee, said the committee hoped raising participation would allow for a more accurate sense of students’ responses to a course.

“You’ve got a class with 18 people in it and only four of the evaluations are returned, it skews the data,” Carlin-Metz said. “In order to get a sense of an accurate picture of the teaching evaluation, we need more participation.”

Mehl stated that the early grade incentive was put in place after examining other schools’ strategies for raising participation. While some colleges withhold grades entirely until evaluations are filled out, this was seen as too extreme a step for Knox to take.

“I think they will get higher every year we do it … as students get used to doing them, sort of building a culture of having students do it,” Mehl said.

Carlin-Metz said she finds students’ responses valuable in assessing the effectiveness of a course and in redesigning it, and that the written comments are more helpful than the numerical ratings.

“A bubble doesn’t give me any context … I don’t know whether what they mean is the assignments I give aren’t getting at it or that the class discussion isn’t getting at it or the reading isn’t getting at it,” Carlin-Metz said.

Only individual faculty members see both the ratings and the comments left by students. The Faculty Personnel Committee and department chairs see professors’ ratings and additional statistics, such as how those ratings trend over time.

Carlin-Metz acknowledged that some comments left by students are unhelpful, citing reports from faculty who had received comments on their appearance. While she did not consider this a major issue with the system, she noted that such comments stand out.

“If I’ve taught nine courses this term and in one course’s evaluation that comment came up, that’s what I’d be focusing on because it would be so outrageous — we obsess on the critical,” Carlin-Metz said.

Carlin-Metz considered evaluating the assessment process itself a natural step for the institution and foresaw the school continuing to work to improve the process.

“There’s many models, it’s just a case of assessing which one would be the most effective in student engagement,” Carlin-Metz said.

Dean of the College and Provost Kai Campbell confirmed that Knox has also examined course evaluations from the perspective of bias, conducting a study on the presence of bias along the lines of race, gender and academic division in professors’ evaluations.

Campbell explained that the school’s interest in investigating the subject arose from the national conversation on disparities in how different groups are perceived. Because the school has used its current course evaluation system for the past three years, the accumulated data was considered sufficient for a formal study.

Campbell said the study revealed statistically significant differences in each of the areas studied, but he viewed those differences as ultimately small.

According to Campbell, the administration concluded that the study did not reveal issues substantial enough to necessitate changes to the current course evaluation process. He stated the results would inform future conversations on how to improve the infrastructure in place for evaluating faculty.

Carlos Flores-Gaytan

Tags: course evaluation, curriculum, faculty, faculty committee
