What Makes a Top Research Medical School? A Call for a New Model to Evaluate Academic Physicians and Medical School Performance

Matthew J. Goldstein, MD, PhD, Mitchell R. Lunn, MD, and Lily Peng, MD, PhD



Since the publication of the Flexner Report in 1910, the medical education enterprise has undergone many changes to ensure that medical schools meet a minimum standard for the curricula and clinical training they offer students. Although the efforts of the licensing and accrediting bodies have raised the quality of medical education, the educational processes that produce the physicians who provide the best patient care and conduct the best biomedical research have not been identified. Comparative analyses are powerful tools to understand the differences between institutions, but they are challenging to carry out. As a result, the analysis performed by U.S. News & World Report (USN&WR) has become the default tool to compare U.S. medical schools. Medical educators must explore more rigorous and equitable approaches to analyze and understand the performance of medical schools. In particular, a better understanding and more thorough evaluation of the most successful institutions in producing academic physicians with biomedical research careers are needed.


In this Perspective, the authors present a new model to evaluate medical schools' production of academic physicians who advance medicine through basic, clinical, translational, and implementation science research. This model is based on relevant and accessible objective criteria that should replace the subjective criteria used in the current USN&WR rankings system. By fostering a national discussion about the most meaningful criteria that should be measured and reported, the authors hope to increase transparency of assessment standards and ultimately improve educational quality.






  • In their 1977 study of medical school faculty, Cole and Lipton2 reported on both the objective and subjective nature of rankings, noting that “research and publication, eminence of faculty, training and research grants available, size of full-time faculty, and perceived effectiveness of training” all correlated with perceived quality (i.e., reputation). They found that although reputation is partially linked to institutional performance, “there is some evidence of a ceiling effect (Harvard) and a halo effect for schools affiliated with universities having national reputations.”


  • In baseball, the traditional system was perceived to be correct on the basis of a rationally congruent approach, where a player’s apparent skills determined his worth, but using statistical data has proven to be a better method of determining why teams win and lose.


  • The most recognized modern comparative analysis of medical schools is performed and published by U.S. News & World Report (USN&WR).5 However, USN&WR relies heavily on subjective and premedical student performance measures.
  • Although the USN&WR evaluation method has undergone numerous changes, it remains subjective and limited. Importantly, USN&WR’s objective criteria evaluate the quality of matriculating students rather than assessing the value added by undergraduate medical education.


  • Unfortunately, extant assessments have not provided parallel objective measures of medical school education in the United States.


  • In our model’s scoring system, each physician was given a score that incorporated data from four primary categories: 
    • publications, grants, clinical trials, and awards/honors. We obtained these data from:
      • MEDLINE (publication record), 
      • the NIH’s RePORTER (grant record), 
      • ClinicalTrials.gov (clinical trial record), and 
      • 37 official award rosters (honors and awards).
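The four data sources above might be combined into a per-physician record along these lines (a hypothetical sketch: the field names and the simple summation are our assumptions, not the paper's published weighting):

```python
from dataclasses import dataclass

@dataclass
class PhysicianScore:
    """Per-physician scores drawn from the four primary data sources."""
    publications: float     # from MEDLINE
    grants: float           # from NIH RePORTER
    clinical_trials: float  # from ClinicalTrials.gov
    awards: float           # from official award rosters

    def total(self) -> float:
        # Equal-weight sum of the four categorical scores (an assumption).
        return self.publications + self.grants + self.clinical_trials + self.awards
```

A school's score would then aggregate these totals over its graduates.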



  • Scoring system
    • We assigned physicians one, two, or three points for each journal article published. Using the eigenfactor ranking system (www.eigenfactor.org), articles published in the top 1,000 journals were assigned three points if the journal’s eigenfactor score x ≥ 0.2, two points if 0.2 > x ≥ 0.1, and one point if x < 0.1. Compared with the Thomson Reuters impact factor, the eigenfactor score assesses impact more fairly — highly cited journals have more influence than lesser-cited journals — and it corrects for self-citation.
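The tiered point assignment above can be sketched as a small function (the thresholds are taken from the text; the function name is our own):

```python
def article_points(eigenfactor: float) -> int:
    """Assign 1-3 points per article from its journal's eigenfactor score.

    Thresholds follow the model described above:
    x >= 0.2 -> 3 points; 0.1 <= x < 0.2 -> 2 points; x < 0.1 -> 1 point.
    """
    if eigenfactor >= 0.2:
        return 3
    if eigenfactor >= 0.1:
        return 2
    return 1
```

For example, `article_points(0.25)` returns 3, `article_points(0.15)` returns 2, and `article_points(0.05)` returns 1.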



  • Score calculations
    • In Table 2, we report the descriptive statistics for these four categorical scores for all U.S. medical school graduates. We capped each categorical score at the 99.9th percentile to limit the influence of outliers and to ensure that schools that consistently produce successful academic physicians are ranked higher than schools that produce only a small number of high-performing individuals.
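Capping at the 99.9th percentile amounts to winsorizing the upper tail; a minimal sketch of that step (illustrative only — the paper does not publish its exact computation):

```python
from statistics import quantiles

def cap_scores(scores):
    """Cap each score at the 99.9th percentile to damp outlier influence."""
    # With n=1000, the last cut point is the 99.9th percentile.
    ceiling = quantiles(scores, n=1000, method="inclusive")[-1]
    return [min(s, ceiling) for s in scores]
```

One extreme outlier is pulled down to the percentile ceiling while all other scores pass through unchanged, so a single exceptional graduate cannot dominate a school's aggregate.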


  • The University of California, San Francisco, School of Medicine (UCSF) was ranked fourth by USN&WR, in part because the faculty, not the graduates, excelled in securing NIH grants. Our evaluation of UCSF graduates, however, placed the school at 17 because its graduates achieved fewer and lower-impact publications and grants. This finding highlights the important point that the measurement of faculty grants may not reflect the quality of education provided by a given school.


  • In a secondary analysis, we subdivided physicians by graduation decade to assess institutional performance trajectory over time.
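Bucketing graduates by decade for such a trajectory analysis might look like the following (an illustrative sketch; the data layout and function name are invented):

```python
from collections import defaultdict

def mean_score_by_decade(graduates):
    """Average (graduation_year, score) pairs within each decade bucket."""
    buckets = defaultdict(list)
    for year, score in graduates:
        buckets[(year // 10) * 10].append(score)
    return {decade: sum(s) / len(s) for decade, s in buckets.items()}
```

Comparing these per-decade means across schools would reveal whether an institution's production of academic physicians is rising or falling over time.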



  • Institutions without this priority, such as those that focus on creating primary care physicians, would likely be disadvantaged by this model, just as they are with the USN&WR Best Medical Schools: Research rankings.




 2015 Jan 20. [Epub ahead of print]

What Makes a Top Research Medical School? A Call for a New Model to Evaluate Academic Physicians and Medical School Performance.


PMID: 25607941 [PubMed - as supplied by publisher]

