Tools to Assess Behavioral and Social Science Competencies in Medical Education: A Systematic Review (Acad Med, 2016)

Patricia A. Carney, PhD, Ryan T. Palmer, EdD, Marissa Fuqua Miller, Erin K. Thayer, Sue E. Estroff, PhD, Debra K. Litzelman, MD, Frances E. Biagioli, MD, Cayla R. Teal, PhD, Ann Lambros, PhD, William J. Hatt, and Jason M. Satterfield, PhD





In a 2004 report, the Institute of Medicine (IOM) concluded that, although 50% of the causes of premature morbidity and mortality are related to behavioral and social factors, medical school curricula in these areas are insufficient.1–3 The behavioral and social science (BSS) domains that the IOM deemed critical in their report included (1) mind–body interactions in health and disease, (2) patient behavior, (3) physician role and behavior, (4) physician–patient interactions, (5) social and cultural issues in health care, and (6) health policy and economics.1 Within these six domains, the IOM identified 26 high-priority topics, such as health risk behaviors, principles of behavior change, ethics, physician well-being, communication skills, socioeconomic inequalities, and health care systems design.1



In addition, the Liaison Committee on Medical Education (LCME) incorporates, as part of its educational program requirements for accreditation, BSS domains5 and requires that schools identify the competencies in these areas that both the profession and the public can expect of a practicing physician. Medical schools must use both content and outcomes-based assessments to demonstrate their learners’ progress toward and achievement of these competencies. To do so, many schools use the broad ACGME core competencies—professionalism, medical knowledge, patient care, interpersonal skills and communication, systems-based practice, and practice-based learning and improvement.6




This lack of standardization of assessment tools makes it difficult to pool evaluation data collected across medical schools, which could help evaluate the effectiveness of different training models or instructional designs for BSS curricula.



Moreover, determining the levels of achievement of entrustable professional activities or milestones7 as well as conducting rigorous educational research require that measures of competency development are validated. However, often this important step is skipped entirely, not fully completed, or lacks the rigor needed to produce reliable results.




Method


Guiding principles


We used the Best Evidence Medical and Health Professional Education Guide8 in our systematic review.


To accomplish this step, we analyzed the LCME accreditation requirements,5 which are divided into five sections: 

    • (1) institutional setting (e.g., governance and organizational environment);

    • (2) educational program for the MD degree (e.g., objectives, learning environment and approach, structure in design and content); 

    • (3) medical students (e.g., student demography, admissions, student services); 

    • (4) faculty (e.g., qualifications, personnel, organization and governance); and 

    • (5) educational resources (e.g., faculty background and time, finances and facilities).


To focus our review, we selected components from the LCME’s Section II: Educational Program for the MD Degree (ED) and focused specifically on educational content. (The LCME standards provided more detail than the ACGME milestones, and thus we relied heavily on the LCME verbiage as we refined our review.)


Search terms



Inclusion/exclusion criteria


We sought to include articles reporting on some form of validity or reliability testing in more than one learning setting for BSS competency assessment measures.



Methods for data abstraction


Methods for assessing instrument quality and study design


For example, 

    • a high-quality article was one that applied a validated BSS instrument (either from the published literature or the included article) using a rigorous study design, such as a randomized controlled trial. 

    • A low-quality article was one that applied an unvalidated measure of BSS competency and used a weak study design to measure the impact of the educational intervention, such as a post-intervention survey of student satisfaction.


We categorized the level of evidence supporting each BSS competency assessment instrument and study design as weak, moderate, or strong. 

    • The weak evidence category included studies containing limited information on the validity and/or reliability of the evaluation instrument or a weak study design, such as a single-group pre–post design. 

    • The moderate evidence category included studies that provided some information about the reliability of the measures used (though the measures were not rigorously assessed or retested in the study sample) or that had a moderately strong study design, such as a single-group historical cohort assessment. 

    • The strong evidence category included studies in which the evaluation instruments were tested rigorously in the study population and used a strong study design, such as a randomized controlled or crossover trial design.


Methods for article categorization, data entry, and analysis


Articles identified for data abstraction were classified into three categories: 

    • (1) instrument development with psychometric assessment only, defined as articles devoted to the statistical validation of a new or existing competency tool, such as a measure of physician empathy; 

    • (2) educational research, defined as articles that used a specific study design and BSS competency assessment tool to draw conclusions about a defined educational research question; and 

    • (3) curriculum evaluation, defined as articles that assessed specific curriculum features.





Results


Of these, we categorized 21 studies as instrument development with psychometric assessment only, 62 as educational research, and 87 as curriculum evaluation (see Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A328).


IRB review

The majority of articles mentioned IRB review (13 of 20 instrument development studies, 35 of 48 educational research studies, and 36 of 46 curricular evaluation studies) with most getting approval or exemption (see Supplemental Digital Appendix 2). 


Study design

  • Randomized study designs with or without controls were most common for educational research studies (23 of 48; 48%) compared with instrument development studies (1 of 20; 5%) and curricular evaluation studies (0 of 46; 0%), 

  • while prospective cohort pre–post designs were most common for curriculum evaluation studies (24 of 46; 52%) compared with educational research studies (6 of 48; 13%) and instrument development studies (1 of 20; 5%) (see Supplemental Digital Appendix 2). 


Validity

Validation using formal psychometric assessment was most common for instrument development (19 of 20; 95%) and educational research studies (25 of 48; 52%) compared with curriculum evaluation studies (17 of 46; 37%).


Competencies

  • The most common BSS learner competency assessed across all types of articles was communication skills (see Supplemental Digital Appendix 3 at http://links.lww.com/ACADMED/A328). Cultural competence and behavior change counseling (which included motivational interviewing) also were commonly assessed, especially in educational research and curriculum evaluation studies. 

  • Using the ACGME competency language, interpersonal skills and communication (in > 90% of included articles), patient care (> 62% of articles), and medical knowledge (> 43% of articles) were most commonly assessed, with practice-based learning and improvement (≤ 10% of articles) and systems-based practice (≤ 10% of articles) less commonly assessed (see Supplemental Digital Appendix 3).

  • Validated instruments that assessed knowledge, attitudes, and skills were most commonly used to evaluate BSS competencies (65%–85%), with standardized patients assessing learners’ performance being the second most common (30%–44%) (see Supplemental Digital Appendix 3).


Articles yielding strong evidence

We ranked 33 articles (29%) as contributing strong evidence to support BSS competency measures of communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Most of these were educational research studies (see Supplemental Digital Appendix 3).


Additional findings

In Supplemental Digital Appendix 4, we provide additional details regarding the included articles. In Supplemental Digital Appendixes 5 and 6, we describe the 62 articles (54%) that yielded moderate evidence in support of a BSS assessment tool and the 19 articles (16.7%) that yielded weak evidence, respectively.



Discussion



We learned that tools assessing communication skills were supported by the most rigorous validation and study design approaches. These tools included both written tests assessing knowledge, attitudes, and skills as well as assessments conducted with standardized patients. Overall, we found a paucity of assessments that used the direct observation of learners interacting with actual patients. Although such approaches are time and resource intensive, several articles support the value of direct observation in assessing learner competencies.123–126



Other high-quality assessments evaluated cultural competence, empathy/compassion, behavior change counseling (e.g., motivational interviewing), and professionalism. However, only one high-quality assessment tool, described in a 2008 article, evaluated teamwork.



We recommend that educators and educational researchers review the literature for established, validated tools to assess BSS competencies in their learners rather than reinventing the wheel.



One of the most significant challenges in completing this review was distinguishing between the strength of the assessment instruments and the strength of the study designs. For example, the tool used might be very strong, but the evaluation design so weak that the strength of the measure could not overcome the weakness of the design in terms of the conclusions that could be drawn from the study findings.



Although educational research articles were also likely to apply rigorous study designs, their validation approaches were not always as robust as those described in instrument development articles. This finding is worrisome as readers may draw conclusions from educational research that employs a strong evaluation design, when in reality the design is only as good as the measures used.



Even more concerning is our finding that curriculum assessment studies were the least likely to include validated instruments and frequently used weak research methods. Researchers cannot generate strong evidence for curricular approaches if the evaluation designs or assessment measures they use are suboptimal. Thus, an important finding from our work is the need for the use of well-validated instruments in quantitative and qualitative studies that represent both educational research and curriculum evaluation.


1 Institute of Medicine. Improving Medical Education: Enhancing the Behavioral and Social Science Content of Medical School Curricula. Washington, DC: National Academies Press; 2004.






Acad Med. 2016 May;91(5):730-42. doi: 10.1097/ACM.0000000000001090.

Tools to Assess Behavioral and Social Science Competencies in Medical Education: A Systematic Review.

Author information

P.A. Carney is professor of family medicine and of public health and preventive medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. R.T. Palmer is assistant professor of family medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. M.F. Miller is senior research assistant, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. E.K. Thayer is research assistant, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. S.E. Estroff is professor, Department of Social Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina. D.K. Litzelman is D. Craig Brater Professor of Medicine and senior director for research in health professions education and practice, Indiana University School of Medicine, Indianapolis, Indiana. F.E. Biagioli is professor of family medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. C.R. Teal is assistant professor, Department of Medicine, and director, Educational Evaluation and Research, Office of Undergraduate Medical Education, Baylor College of Medicine, Houston, Texas. A. Lambros is active emeritus associate professor, Social Sciences & Health Policy, Wake Forest School of Medicine, Winston-Salem, North Carolina. W.J. Hatt is programmer analyst, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. J.M. Satterfield is professor of clinical medicine, University of California, San Francisco, School of Medicine, San Francisco, California.

Abstract

PURPOSE:

Behavioral and social science (BSS) competencies are needed to provide quality health care, but psychometrically validated measures to assess these competencies are difficult to find. Moreover, they have not been mapped to existing frameworks, like those from the Liaison Committee on Medical Education (LCME) and Accreditation Council for Graduate Medical Education (ACGME). This systematic review aimed to identify and evaluate the quality of assessment tools used to measure BSS competencies.

METHOD:

The authors searched the literature published between January 2002 and March 2014 for articles reporting psychometric or other validity/reliability testing, using OVID, CINAHL, PubMed, ERIC, Research and Development Resource Base, SOCIOFILE, and PsycINFO. They reviewed 5,104 potentially relevant titles and abstracts. To guide their review, they mapped BSS competencies to existing LCME and ACGME frameworks. The final included articles fell into three categories: instrument development, which were of the highest quality; educational research, which were of the second highest quality; and curriculum evaluation, which were of lower quality.

RESULTS:

Of the 114 included articles, 33 (29%) yielded strong evidence supporting tools to assess communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Sixty-two (54%) articles yielded moderate evidence and 19 (17%) weak evidence. Articles mapped to all LCME standards and ACGME core competencies; the most common was communication skills.

CONCLUSIONS:

These findings serve as a valuable resource for medical educators and researchers. More rigorous measurement validation and testing and more robust study designs are needed to understand how educational strategies contribute to BSS competency development.

PMID: 26796091

PMCID: PMC4846480

DOI: 10.1097/ACM.0000000000001090

