The phantom professor's dubious lectures: mindless teaching evaluations by medical students (Med Educ, 2015)

A curious case of the phantom professor: mindless teaching evaluations by medical students

Sebastian Uijtdehaage & Christopher O’Neal






Doubts about the validity of student evaluations of teaching (SETs) and concerns about bias are mounting. Research also suggests that SETs contain untruths and that students complete them mindlessly, further undermining their validity.

Unfortunately, there is a burgeoning body of research from undergraduate and professional courses at North American campuses that casts serious doubt on the validity of SETs,1 suggests they are rife with biases2,3 and even untruths,4,5 and indicates that students may complete evaluations in a mindless manner that further harms the validity of the process.6



We inserted a fictitious lecturer into the evaluation forms for two 8-week courses. This lecturer was given a gender-ambiguous name, clearly distinct from existing faculty names, and a generic lecture title ('Introduction, Lung Disease'). Students were required to evaluate all lecturers, including the fictitious one, within two weeks of the end of each course.

We inserted one fictitious lecturer into the evaluation forms for two 8-week, pre-clinical, classroom-style courses (for the Year 2 class of 2010 and the Year 1 class of 2011). We gave these ‘lecturers’ gender-ambiguous names (e.g. ‘Pat Turner’, ‘Chris Miller’) that were distinct from existing names, and added generic lecture titles (e.g. ‘Introduction, Lung Disease’) (Table 1). Students were required to submit their anonymous ratings of all lecturers, including the fictitious ones, within 2 weeks after the course using our online evaluation system CoursEval (ConnectEDU, Inc., Boston, MA, USA).


Students could also choose the option 'Not Applicable' instead of evaluating a lecturer.

Students could choose not to evaluate a lecturer by marking the option ‘Not Applicable’.


The following year, we repeated the same procedure, this time adding a photograph of an attractive young model. The number of lecturers who actually taught in each course ranged from 23 to 52.

The following year, we repeated this process (in the classes of 2011 and 2012), but also included a small portrait (150 × 150 pixels) of an attractive young model who, perhaps regretfully, did not resemble any of our faculty members. The number of actual lecturers in each course ranged from 23 to 52, most of whom were depicted in portraits of similar dimensions in our evaluation system.



Some students even left comments on the teaching performance of the fictitious lecturer. Three students wrote that they did not remember the lecture but wished it had been given, whereas three other students confabulated comments along the lines of 'It was a really great lecture.'

A handful of students even went so far as to provide comments on the performance of the fictitious lecturers. Although three students explicitly stated that they did not recall the lectures but wished they had (‘I don’t think we had this lecture but it would have been useful!’), three other students confabulated: ‘She provided a great context’; ‘Lectures moved too fast for me’, and ‘More time for her lectures’.






Such mindless evaluation is not just a recent problem; it echoes Reynolds's 1977 study, in which students rated a film on sexuality more highly than a lecture on the history of psychology, even though neither event had actually taken place because both had been cancelled.

Mindless evaluation is not a modern problem. Our findings echo those described by Reynolds in a landmark 1977 paper.7 Like us, he serendipitously found that a vast majority of undergraduate psychology students rated a movie on sexuality higher than a lecture on the history of psychology, although in fact neither event had taken place (due to cancellations). Where our scenario becomes more problematic than that described by Reynolds7 is in the unique structure of a medical curriculum, in which a multitude of instructors teach in the same course and are evaluated in bulk by students.



Dunegan and Hrivnak identified three risk factors that encourage mindless teaching evaluations.

Dunegan and Hrivnak6 describe three risk factors that may encourage mindless evaluation practices: 

(i) the cognitively taxing nature of SETs; 

(ii) the lack of perceived impact of SETs on the curriculum, and 

(iii) the degree to which the evaluation task is experienced as just another routine ‘chore’. 

Clearly, all of these risk factors may present themselves in a medical school environment.


  • First: With regard to the first risk factor, evaluating teachers is a cognitively demanding task when it is done conscientiously weeks after the fact.
  • Second: Chen and Hoshower9 use expectancy theory to show that students are less motivated to partake in an activity such as evaluation if they fail to see the likelihood that the activity will lead to a desired outcome (e.g. teacher change).
  • Third: Lastly, medical students can certainly be forgiven for finding evaluations to be painfully routine and burdensome. As SETs are part of the mainstay of teaching assessment, medical students fill out evaluations numerous times per year during all 4 years of their training.




Building on Dunegan and Hrivnak's framework, an alternative to the SET can be devised. For example, at the start of a course, a subset of students could serve as 'prospective (rather than retrospective) evaluators'. As part of their professionalism training, these students would first be educated in the effective use of evaluation tools. As a team, they would also practise giving constructive feedback. They would jointly prepare a comprehensive report for the course chair and the individual teachers. The evaluation tools would focus on predictive rather than opinion-based evaluation; predictive evaluation has been shown to achieve the same result with fewer responses. Faculty members could be required to respond to the report, explaining how the feedback will or will not be used. Beyond the other benefits of a student evaluation team, this approach would engage students in 'mindful' evaluation and, more importantly, provide a more robust foundation for programme improvement and promotion decisions.

Using the framework described by Dunegan and Hrivnak,6 we can conceive of an alternative approach to the SET that may mitigate the risk factors described here. For example, at the beginning of a course, a sample of students in a class could be charged to be prospective (not retrospective) course and faculty evaluators. As part of their professionalism training, these students could first be educated in the effective use of evaluation tools that can be employed in situ (e.g. with hand-held devices) and that do not rely on the activation of episodic memory. As a team, these students could practise providing constructive feedback when, upon completion of the course, they collaborate on a comprehensive report to the course chair and teachers involved in the course. Evaluation tools could be focused on predictive evaluations (e.g. by asking students to predict their peers’ opinions of a teacher) rather than on opinion-based evaluations; predictive evaluations have been shown to require fewer responses to achieve the same result.10 Faculty members could be required to respond to these reports and explain how the feedback is to be used or not used so that students understand the impact of their efforts. In addition to the educational benefit to be derived from practising teamwork and providing constructive feedback, such an approach may engage students in a mindful way and, importantly, may yield information that provides a more robust foundation for programme improvement and promotion decisions.



'As students become sufficiently skilled at evaluating lectures without attending them, there would be no need for them to wait until the end of the semester to fill out evaluations.'

The present study should raise a red flag to medical schools in which students are asked to evaluate numerous lecturers after a time delay. It defies common sense (and a huge body of literature) to expect that such an evaluation approach procures a solid foundation on which decisions regarding faculty promotions and course improvement can be based. If we continue along this path, we may just as well follow Reynolds’s tongue-in-cheek suggestion that ‘as students become sufficiently skilled in evaluating [. . .] lectures without being there, [. . .] there would be no need [for them] to wait until the end of the semester to fill out evaluations’.7









Med Educ. 2015 Sep;49(9):928-32. doi: 10.1111/medu.12647.

A curious case of the phantom professor: mindless teaching evaluations by medical students.

Author information

  • 1Center for Educational Development and Research, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA.

Abstract

CONTEXT:

Student evaluations of teaching (SETs) inform faculty promotion decisions and course improvement, a process that is predicated on the assumption that students complete the evaluations with diligence. Anecdotal evidence suggests that this may not be so.

OBJECTIVES:

We sought to determine the degree to which medical students complete SETs deliberately in a classroom-style, multi-instructor course.

METHODS:

We inserted one fictitious lecturer into each of two pre-clinical courses. Students were required to submit their anonymous ratings of all lecturers, including the fictitious one, within 2 weeks after the course using a 5-point Likert scale, but could choose not to evaluate a lecturer. The following year, we repeated this but included a portrait of the fictitious lecturer. The number of actual lecturers in each course ranged from 23 to 52.

RESULTS:

Response rates were 99% and 94%, respectively, in the 2 years of the study. Without a portrait, 66% (183 of 277) of students evaluated the fictitious lecturer, but fewer students (49%, 140 of 285) did so with a portrait (chi-squared test, p < 0.0001).
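
For readers who want to sanity-check the reported comparison, the counts in this abstract (183 of 277 without a portrait versus 140 of 285 with a portrait) are enough to reproduce the chi-squared test. The snippet below is a minimal sketch using scipy; the 2 × 2 table layout and variable names are our own illustration, not the authors' analysis code.

```python
# Minimal sketch: reproduce the chi-squared comparison from the counts in the abstract.
# (183/277 rated the phantom without a portrait vs. 140/285 with a portrait.)
from scipy.stats import chi2_contingency

# Rows: no-portrait year, portrait year; columns: rated the phantom, did not rate
table = [
    [183, 277 - 183],  # 66% rated the fictitious lecturer
    [140, 285 - 140],  # 49% rated the fictitious lecturer
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")  # p comes out well below 0.0001
```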

CONCLUSIONS:

These findings suggest that many medical students complete SETs mindlessly, even when a photograph is included, without careful consideration of whom they are evaluating and much less of how that faculty member performed. This hampers programme quality improvement and may harm the academic advancement of faculty members. We present a framework that suggests a fundamentally different approach to SET that involves students prospectively and proactively.

© 2015 John Wiley & Sons Ltd.

PMID: 26296409 [PubMed - in process]

