Students don’t read course evaluation forms

Yet another reason for concern about the growing role these evaluations play in our regimes of constant assessment.

Researchers at a US university conducted an experiment in which they inserted patently false statements into end-of-module questionnaires and found that surprisingly high numbers of undergraduates endorsed them as true.

More than two-thirds of students, 69 per cent, endorsed the statement that “the instructor took roll at the beginning, middle and end of every class meeting”.

9 thoughts on “Students don’t read course evaluation forms”

  1. Don’t we fill out forms, evals, etc., with some care?

    I recently filled out a medical form in a doctor’s office. There was no office assistant to ask me to fill in areas I left blank or to correct obvious errors.

    The result, I was embarrassed to discover, was a complete mess.

  2. Interesting. I think I take my evaluation forms more seriously than most philosophers do (at least judging from the comments I see on the web), and I believe I’ve improved my teaching by taking them seriously. But I will admit to being very annoyed when students give me a score (either good or bad!) for the performance of TAs in my classes, since I have never had a TA in all my time teaching. I just have to assume that the students who fill in the “TA performance” answers (usually about 1/4 to 1/3 of them) are not being serious. (This is on a form that I don’t get to design myself.)

  3. From the article: “In a follow-up survey, 113 students who were not involved in the experiment were asked how seriously they took course evaluations. On an 11-point scale, the average response was 6.8, but only one in five students said that they took evaluations seriously all the time. More than three-quarters of respondents (76 per cent) said that they sometimes took the process seriously, but at other times they “just bubbled in answers” to finish the survey quickly.”

    I wonder if those students were more attentive to the questions and instructions on the research survey form.

  4. They might have assumed the question was simply included by mistake, and then just interpreted it in the most obvious way.

  5. Or they might have assumed that this was a byzantine requirement the teacher ignored, and answered as they did to avoid getting their instructor in hot water.

  6. “More than two-thirds of students, 69 per cent, endorsed the statement that ‘the instructor took roll at the beginning, middle and end of every class meeting’.

    Nearly a quarter, 24 per cent, agreed that ‘the instructor was late or absent for all class meetings’, while 28 per cent said that it was true that ‘the instructor never even attempted to answer any student questions related to the course’.”

    I’m guessing the latter numbers (around 1/4) are more representative of students just not reading. If I were asked the question about taking roll, I liked the professor, and I did not have an N/A option, I would say yes. Knowing that professors are judged by the scores they get, I wouldn’t want a professor I liked and felt was a good instructor to get a poor score from me because they didn’t do something stupid that I’d agree was unnecessary. I’d just say yes to ensure an overall score that matched my feeling about the quality of instruction.

  7. Relevant and of possible interest:

    Stephen R. Porter, “Do College Student Surveys Have Any Validity?” The Review of Higher Education, Fall 2011, Vol. 35, No. 1, pp. 45–76.
    “In this article, I argue that the typical college student survey question has minimal validity and that our field requires an ambitious research program to reestablish the foundation of quantitative research on students. Our surveys lack validity because (a) they assume that college students can easily report information about their behaviors and attitudes, when the standard model of human cognition and survey response clearly suggests they cannot, (b) existing research using college students suggests they have problems correctly answering even simple questions about factual information, and (c) much of the evidence that higher education scholars cite as evidence of validity and reliability actually demonstrates the opposite.
    I choose the National Survey of Student Engagement (NSSE) for my critical examination of college student survey validity ….”
     
    Stephen R. Porter, “Self-Reported Learning Gains: A Theory and Test of College Student Survey Response,” Research in Higher Education (Nov. 2012), DOI: 10.1007/s11162-012-9277-0.
    Abstract: Recent studies have asserted that self-reported learning gains (SRLG) are valid measures of learning, because gains in specific content areas vary across academic disciplines as theoretically predicted. In contrast, other studies find no relationship between actual and self-reported gains in learning, calling into question the validity of SRLG. I reconcile these two divergent sets of literature by proposing a theory of college student survey response that relies on the belief-sampling model of attitude formation. This theoretical approach demonstrates how students can easily construct answers to SRLG questions that will result in theoretically consistent differences in gains across academic majors, while at the same time lacking the cognitive ability to accurately report their actual learning gains. Four predictions from the theory are tested, using data from the 2006–2009 Wabash National Study. Contrary to previous research, I find little evidence as to the construct and criterion validity of SRLG questions.

    http://stephenporter.org/papers1/survey-methods/

  8. People who work on assessment pretty universally agree that Student Evaluations of Teaching (SETs) should NEVER be used in isolation, though a lot of the research points to their correlation with student learning. I work as a faculty developer, and we do a lot of work trying to get departments and administrators to only ever consider them as one point of data (the whole faculty development field pushes for this).

    This blog post actually covers the research pretty well, and the comments section adds a good discussion of the research: http://cte.rice.edu/blogarchive/2015/07/09/studentevaluations

    Linda Nilson is a researcher who believes their only value is in measuring student satisfaction (not learning): “Time to Raise Questions About Student Ratings,” To Improve the Academy: Resources for Faculty, Instructional, and Organizational Development, Volume 31 (http://www.wiley.com/WileyCDA/WileyTitle/productCd-1118257812.html). She does talk of creating your own evidence of student learning, which in many cases you would be able to include in tenure or promotion dossiers: http://www.facultyfocus.com/articles/teaching-and-learning/boost-your-student-ratings-creating-evidence-student-learning/.

    Nira Hativa is a researcher who argues that they can rate teacher effectiveness in her book, Student Ratings of Instruction: Recognizing Effective Teaching.
    (http://www.amazon.com/Student-Ratings-Instruction-Recognizing-Effective/dp/1500300691/ref=sr_1_1?s=books&ie=UTF8&qid=1458570126&sr=1-1&refinements=p_27%3ANira+Hativa).

    There are also things you can do to improve your scores and the response rate. Give a midterm survey and then discuss the results with your class. This gives you a chance to correct misconceptions or problems before the end-of-semester survey. Sometimes there are simple things you can do that don’t water down the course but make students much happier. It also gives you a chance to tell them (or reiterate) why you’re teaching the course the way you are. This both shows them respect and reinforces the idea that their opinion matters to you and that you take it seriously. It also tends to improve end-of-semester scores (giving midterm surveys of teaching correlates with improved end-of-semester scores, even if you don’t look at the results). Cornell has a sample survey here: https://www.cte.cornell.edu/teaching-ideas/designing-your-course/student-evaluations.html.

    Just talking about evaluations with students helps to improve the response rate. Vanderbilt has a nice overview of this kind of discussion along with ideas for interpreting them: https://cft.vanderbilt.edu/guides-sub-pages/student-evaluations/.
