
Enhancing Evaluation in an Undergraduate Medical Education Program

Kathryn A. Gibson, BMBCh, PhD, Patrick Boyle, MEd, Deborah A. Black, PhD, Margaret Cunningham, MSW, Michael C. Grimm, MBBS, PhD, and H. Patrick McNeil, MBBS, PhD




Introduction of the new undergraduate medical education program at UNSW

The implementation of a new undergraduate medical education program at the University of New South Wales (UNSW) in Sydney, Australia, brought many challenges, not the least of which was determining whether the new program was effective. In this article, we describe the development and initial progress in implementing a comprehensive whole-program evaluation and improvement strategy for the new undergraduate medical education program at UNSW. Our literature review found relatively little practical guidance about establishing systems to define and maintain the quality of undergraduate medical education programs that encompass multiple aspects of quality. (...)




Background: Development of a Whole-Program Evaluation Framework

 


A new Medicine program

 

Curriculum overview

In March 2004, the Faculty of Medicine at UNSW implemented an innovative six-year, three-phase undergraduate Medicine program.1 Compared with the previous content-based Flexnerian-style curriculum, the new program is explicitly outcome based,...

      • requiring students to demonstrate specified levels of performance in a range of medicine-specific capabilities (biomedical science, social aspects of health, clinical performance, and ethics) and 
      • generic capabilities (critical evaluation, reflection, communication, and teamwork) as they progress through the program. 

All courses are interdisciplinary and highly integrated, both horizontally and vertically.

Important features include...

      • early clinical experience, 
      • small-group teaching, 
      • flexibility in courses and assessments, and 
      • a high degree of alignment between graduate outcomes, learning activities, and assessments. 

Each two-year phase uses a distinct learning process that aims to develop autonomous learning progressively over the six years. 

These approaches emphasize important adult education themes:

      • student autonomy, 
      • learning from experience, 
      • collaborative learning, and 
      • adult teacher–learner relationships.


The assessment system (portfolio) in the new curriculum

The program’s assessment system is particularly important and incorporates many novel features, including...

      • criterion referencing of results, 
      • interdisciplinary examinations, 
      • a balance between continuous and barrier assessments, 
      • peer feedback, and 
      • performance assessments of clinical competence. 

To examine the generic capabilities that are not easily measured by traditional assessments, and to ensure overall alignment between assessments, learning, and outcomes, a data-driven portfolio examination occurs in each phase.2

      • The portfolio assessment is supported by an information technology system, eMed,3 which records student assessment grades, examiner feedback on assessment tasks, and peer comments on teamwork skills. 
      • At the end of each two-year phase, students submit evidence of achievement for each capability using reflections on and reference to their performance recorded in eMed, compared with the expected level of achievement for that phase of the program. (...)
      • Portfolio examiners look at the patterns of grades, teamwork comments, and other evidence cited by a student to see what lies behind the student data, and then they make a judgment from all the evidence. The portfolio assessment system is discussed extensively elsewhere.2,3
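The portfolio workflow above, in which grades, examiner feedback, and peer comments are drawn together from eMed so examiners can judge the pattern behind the data, can be sketched as a simple data model. This is a hypothetical illustration only: the field and capability names below are assumptions, not the actual eMed schema.

```python
from dataclasses import dataclass
from typing import Optional
from collections import defaultdict

# Illustrative sketch only; the real eMed record structure is not published here.
@dataclass
class EvidenceItem:
    capability: str            # e.g. "teamwork", "critical evaluation"
    source: str                # e.g. "assessment grade", "peer comment"
    grade: Optional[int] = None  # numeric grade where applicable
    note: str = ""             # examiner feedback or peer comment text

def summarize_portfolio(items):
    """Group a student's evidence by capability so an examiner can see
    the pattern of grades and comments behind each claimed achievement."""
    summary = defaultdict(lambda: {"grades": [], "notes": []})
    for item in items:
        bucket = summary[item.capability]
        if item.grade is not None:
            bucket["grades"].append(item.grade)
        if item.note:
            bucket["notes"].append(f"{item.source}: {item.note}")
    return {
        cap: {
            "mean_grade": (sum(v["grades"]) / len(v["grades"])) if v["grades"] else None,
            "notes": v["notes"],
        }
        for cap, v in summary.items()
    }
```

The point of the sketch is the examiner's view: a per-capability aggregation of heterogeneous evidence, from which a holistic judgment (not a simple average) is then made.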


Governance (the Office of Medical Education and interdisciplinary groups)

In contrast to previous arrangements in which individual disciplines and departments managed content-specific components of the old curriculum, responsibility for the planning and implementation of the new program was given to a single unit—the Office of Medical Education under the direction of the faculty’s associate dean of education. However, as part of a planned change management process to maintain faculty ownership and encourage disciplinary integration, much of the work was devolved to interdisciplinary course design and implementation groups, each responsible for the delivery of one or more modular, integrated, eight-week courses, or a limited number of vertical strands, such as clinical and communication skills and ethics. The interdisciplinary groups were assisted by instructional designers and/or included faculty members with pedagogical training or experience.


 


The need for effective evaluation

 

Requirements of the accreditation body (AMC)

Such major pedagogical and organizational change required the development of systems and processes to 

(1) evaluate the effectiveness of the change, 

(2) monitor its implementation to enable continual improvement, and 

(3) use evaluation to recognize and report on excellence in teaching. 

An important driver was the accreditation authority, the Australian Medical Council (AMC), which sets highly specific standards with respect to ongoing monitoring, evaluation, feedback, and reporting of schools’ operations,4 similar to those specified by the Liaison Committee on Medical Education for North American medical schools.5 The AMC lists nine relevant standards that include...

      • the need for “ongoing monitoring … [of] curriculum content, quality of teaching, assessment and student progress,”
      • the need for “teacher and student feedback,” and “using the results [of evaluation] for course development.” 
      • Schools are required to analyze “the performance of student cohorts … in relation to the curriculum and the outcomes of the medical course” and “evaluate the outcomes of the course in terms of postgraduate performance, career choice and career satisfaction.” 
      • Finally, schools are required to report “outcome evaluation … to academic staff, students … [and] … the full range of groups with an interest in graduate outcomes.”4


Meeting these standards for an integrated curriculum requires considerable innovation

Meeting these standards for an integrated program containing many innovative features required the development of new strategies and approaches to evaluation and improvement. 

      • Initially, we conducted a review of previous evaluation processes at UNSW and found these to be largely content driven and not integrated into an overall program evaluation framework. 
      • We then reviewed the medical and health education literature, which acknowledges the difficulty in conducting methodologically rigorous research or evaluation studies in education.6 
      • We found many examples of evaluations of individual aspects of particular programs but few descriptions of evaluations of multiple aspects of a program, and very little on the optimal strategy for systematically evaluating and then improving a medical program as a whole. Of those reviewed, two examples stand out for the breadth of the evaluation approach adopted: the McGill dental curriculum7 and the medical program at Dundee University.8 (...)

The University of Dundee's approach

The University of Dundee group, in describing its approach to evaluation of the medicine curriculum introduced in 1995,8 used evidence from a number of sources, which included internal reviews, external reviews, and student examination data. (...)


There is little existing research on the evaluation of undergraduate programs.

Overall, we found little practical guidance in developing frameworks for systematic evaluation of undergraduate educational programs; the limited literature that does exist is more applicable to graduate medical education (GME).9–11 In the wider evaluation literature, a topic of currency is the importance of program evaluation leading to explicit actions, particularly improvement for students and other key stakeholders,12–14 an issue commonly referred to as “closing the loop.”15




Developing a Multicomponent Model to Evaluate Program Quality

 

Establishment of the Program Evaluation and Improvement Group

To develop a comprehensive evaluation process, a Program Evaluation and Improvement Group (PEIG) was established by the dean of medicine with administrative support from the Office of Medical Education and the associate dean of education. (...)


The PEIG's primary task

A primary task of the PEIG was to establish a framework or model that we considered would encompass all facets of the curriculum at UNSW. With the AMC standards4 as an important driver, we formulated six strategic principles to guide development of a program evaluation and improvement strategy (List 1). (...)



Developing a program evaluation model based on the above principles

With these issues and ideas in mind, we undertook the development of a progressive model for program evaluation and improvement. (...)





Considerations in deriving the key quality indicators

In deriving the key quality indicators, we considered the following questions: 

      • (1) Would the indicator facilitate action for improvement? 
      • (2) Is valid information available for the indicator? 
      • (3) Will the indicator convey clear meaning to stakeholders? 
      • (4) Would the information content of the indicator be considered trustworthy and valuable? 
      • (5) Does the indicator represent a balance of perspectives for multiple stakeholders? 

In addition, we were cognizant of the general principles of content validity and relative importance. 

(...)
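The five vetting questions amount to a checklist applied to each candidate indicator. As a hypothetical sketch (none of these field names come from the article), a working party might encode the review like this:

```python
from dataclasses import dataclass

# Hypothetical encoding of the PEIG's five vetting questions; the field
# names are illustrative assumptions, not terminology from the article.
@dataclass
class IndicatorReview:
    name: str
    facilitates_action: bool     # (1) would it drive improvement?
    valid_data_available: bool   # (2) is valid information available?
    clear_to_stakeholders: bool  # (3) conveys clear meaning?
    trustworthy: bool            # (4) trusted and valued content?
    balances_perspectives: bool  # (5) balances multiple stakeholders?

    def passes(self) -> bool:
        """Retain an indicator only if all five questions hold."""
        return all([self.facilitates_action, self.valid_data_available,
                    self.clear_to_stakeholders, self.trustworthy,
                    self.balances_perspectives])
```

A conjunctive rule like `all(...)` reflects the strictest reading of the questions; in practice a working party might weigh them rather than require every one.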

 

The importance of ownership and stakeholder participation

We recognized that any such model can only be effective if there is ownership of it by key stakeholders and if these stakeholders are actively participating in the processes that are developed as an extension of the model. A key practice principle we adopted to implement the model is that, wherever possible, evaluation activities should be viewed as an integral part of day-to-day practice rather than as externally imposed additional tasks. Thus, an important philosophy is that teachers, course and phase convenors, and relevant administrators should undertake evaluation and improvement activities as an inherent part of effective and scholarly teaching. (...) The PEIG sees its principal role as supporting and facilitating this process, rather than being an external body undertaking such tasks. (...)




Addressing Evaluation and Improvement of Each Quality Aspect

 

(...) Before establishing specific projects, each working party reviewed their respective indicators (Table 1) to identify relevant available data, areas where instrument development needed to occur, and the reporting, communication, and implementation implications for indicator development. To illustrate the evaluation model in practice, two areas of work to date are elaborated on below.


 


Evaluation and improvement of the medical student experience

 

Four stages

In 2005, the student experience working party established a baseline project to research and evaluate the medical student experience at UNSW and to establish a sustainable process for its evaluation and improvement. Four stages were involved: 

(1) identifying the students’ understanding of the construct “student experience,” 

(2) developing a trial instrument and process for collecting evaluative data from students, 

(3) undertaking appropriate analyses and communication of findings to stakeholders, and 

(4) establishing an agreed and sustainable process within the faculty for evaluation and improvement of the medical student experience. 


First, the authors investigated how students understood the term 'student experience'; five facets emerged.

Before developing any instrumentation for tapping student perceptions of their experience as medical students at UNSW, we decided to investigate what students understood by the term student experience. (...) Five main facets emerged and were adopted for the trial medical student experience questionnaire (MEDSEQ): 

      • learning, teaching and assessment; 
      • organization and student understanding of the program; 
      • community interaction and value; 
      • student support; and 
      • resources. 

These encompassed most of the key quality indicators for the student experience aspect shown in Table 1.


The trial MEDSEQ 

The trial MEDSEQ used 32 fixed-response items arranged under the five facets referred to above, plus the option of open-ended comments to collect student participants’ responses. We used a five-point scale (ranging from only rarely to almost always) for students to assess the frequency with which they had experienced the circumstances described by each facet (Chart 1). (...)
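The five-point frequency scale lends itself to simple facet-level summaries. As a minimal sketch (assuming responses have already been coded 1 to 5, from only rarely to almost always; the actual MEDSEQ items and analysis are not reproduced here):

```python
# Hypothetical sketch of summarizing trial-MEDSEQ-style responses.
# The facet names follow the article; the tidy (facet, score) input
# format and 1-5 coding are assumptions.
def facet_means(responses):
    """responses: iterable of (facet, score) pairs, one per student per
    item, where score is the 1-5 frequency rating.
    Returns the mean scale score for each facet."""
    totals, counts = {}, {}
    for facet, score in responses:
        totals[facet] = totals.get(facet, 0) + score
        counts[facet] = counts.get(facet, 0) + 1
    return {facet: totals[facet] / counts[facet] for facet in totals}
```

Reporting at the facet level rather than item by item matches how the five facets were used to structure the questionnaire.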


In early May 2007, after review of the major MEDSEQ report, the Faculty of Medicine’s principal curriculum committee endorsed embedding the MEDSEQ evaluation and improvement process in the Faculty of Medicine’s continuing operations. Data gathering, analysis, and communication of findings will occur every two years, and a range of responses to the findings in the inaugural report will be initiated to ensure provision of more effective feedback on learning and increased availability of appropriate mentoring.





 


Improvement of teaching and the management of teaching workload

 

Addressing key quality indicators for two components

The working party on staff and teaching has been addressing two of the key quality indicator components for this program aspect: support for teaching and improving the quality of teaching (Table 1).

      • With respect to support for teaching, we developed a policy document and framework for implementation entitled “Managing Teaching Workloads in the Faculty of Medicine,” which was endorsed by the faculty’s major management committee and is being used by department heads to help manage their teaching and research missions. (...)
      • With respect to improving the quality of teaching, guided by the principle of “closing the loop,” efforts are being made to enhance the extent to which student evaluative data are acted on to implement improvement, stimulate reflection, and foster more scholarly activities in relation to learning and teaching. Improving teaching on the basis of student feedback has proved challenging at UNSW in the past because feedback on individuals’ teaching is reported only to the teacher affected and is not available to curriculum leaders or coordinators. (...) 


Expected Outcomes and Initial Evidence of Effectiveness

 

Expected outcomes

The key outcomes we expect as a result of the processes and strategies we have implemented can be distilled to the following: 

      • multiple aspects of the quality of the UNSW Medicine program are able to be evaluated and reported (e.g., student and staff experiences, student outcomes, content and resources); 
      • the evaluation process is continual; 
      • action to improve the program follows measurement; and 
      • the processes we describe are an inherent part of the responsibility of teachers and course coordinators. 

(...)

 

Evaluation results

Phase 1 (medical school years 1 and 2) of the program consists of four integrated eight-week courses per year, the majority of which have been formally evaluated using a standard UNSW course evaluation instrument. As shown in Table 2, initial evaluations in 2004 showed strong evidence that the new program encouraged active student participation and collaborative learning (92% and 97% agreement, respectively). However, students expressed significant dissatisfaction with the provision of feedback and with assessment tasks. (...)
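Agreement figures like the 92% and 97% above are typically the share of respondents selecting agree or strongly agree on a Likert item; the exact coding of the UNSW course evaluation instrument is an assumption here. A minimal sketch:

```python
# Hypothetical sketch: percent agreement from Likert response counts.
# The label set and the agree/strongly-agree convention are assumptions,
# not details taken from the UNSW instrument.
def percent_agreement(counts):
    """counts: dict mapping a Likert label to its number of respondents.
    Returns the percentage choosing agree or strongly agree, rounded."""
    total = sum(counts.values())
    agree = counts.get("agree", 0) + counts.get("strongly agree", 0)
    return round(100 * agree / total)
```

Tracking this single figure per item over successive cohorts is what makes the year-on-year comparisons in Table 2 possible.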


Over subsequent years, there has been significant evidence of improved student experience of phase 1 with progressively greater levels of agreement that the courses provided adequate information about assessments and that the assessments were appropriate (both up to 81% agreement) (Table 2). (...)











Acad Med. 2008 Aug;83(8):787-93. doi: 10.1097/ACM.0b013e31817eb8ab.

Enhancing evaluation in an undergraduate medical education program.

Abstract

Approaches to evaluation of medical student teaching programs have historically incorporated a range of methods and have had variable effectiveness. Such approaches are rarely comprehensive, typically evaluating only a component rather than the whole program, and are often episodic rather than continuous. There are growing pressures for significant improvement in academic program evaluation. The authors describe an initiative that arose after a radical reorganization of the undergraduate medical education program at the University of New South Wales in part in response to feedback from the accrediting authority. The aim was to design a comprehensive, multicomponent, program-wide evaluation and improvement system. The framework envisages the quality of the program as comprising four main aspects: curriculum and resources; staff and teaching; student experience; and student and graduate outcomes. Key principles of the adopted approach include the views that both student and staff experiences provide valuable information; that measurement of student and graduate outcomes are needed; that an emphasis on action after evaluation is critical (closing the loop); that the strategies and processes need to be continual rather than episodic; and that evaluation should be used to recognize, report on, and reward excellence in teaching. In addition, an important philosophy adopted was that teachers, course coordinators, and administrators should undertake evaluation and improvement activities as an inherent part of teaching, rather than viewing evaluation as something that is externally managed. Examples of the strategy in action, which provide initial evidence of validation for this approach, are described.

PMID: 18667897 [PubMed - indexed for MEDLINE]

