
Defining and Assessing Professional Competence 

Ronald M. Epstein, MD; Edward M. Hundert, MD

Author Affiliations: Departments of Family Medicine (Dr Epstein), Psychiatry (Drs Epstein and Hundert), and Medical Humanities (Dr Hundert), University of Rochester School of Medicine and Dentistry, Rochester, NY.




Medical schools, postgraduate training programs, and licensing bodies conduct assessments to certify the competence of future practitioners, discriminate among candidates for advanced training, provide motivation and direction for learning, and judge the adequacy of training programs. Standards for professional competence delineate key technical, cognitive, and emotional aspects of practice, including those that may not be measurable.1,2 However, there is no agreed-upon definition of competence that encompasses all important domains of professional medical practice. In response, the Accreditation Council for Graduate Medical Education defined 6 areas of competence and some means of assessing them3: 

    • patient care (including clinical reasoning), 
    • medical knowledge, 
    • practice-based learning and improvement (including information management), 
    • interpersonal and communication skills, 
    • professionalism, and 
    • systems-based practice (including health economics and teamwork).3




DEFINING PROFESSIONAL COMPETENCE


The Authors' Definition

Building on prior definitions,1- 3 we propose that professional competence is the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served. 

      • Foundation: Competence builds on a foundation of basic clinical skills, scientific knowledge, and moral development. 
      • Components: It includes 
        • a cognitive function—acquiring and using knowledge to solve real-life problems; 
        • an integrative function—using biomedical and psychosocial data in clinical reasoning; 
        • a relational function—communicating effectively with patients and colleagues; and 
        • an affective/moral function—the willingness, patience, and emotional awareness to use these skills judiciously and humanely (BOX 1). 
      • Competence depends on habits of mind, including attentiveness, critical curiosity, self-awareness, and presence. 
      • Professional competence is developmental, impermanent, and context-dependent.






Acquisition and Use of Knowledge

EBM is explicit knowledge, but Polanyi argues that competence is defined by tacit knowledge.

Evidence-based medicine is an explicit means for generating an important answerable question, interpreting new knowledge, and judging how to apply that knowledge in a clinical setting.4 But Polanyi5 argues that competence is defined by tacit rather than explicit knowledge. 

Tacit knowledge is that which we know but normally do not explain easily, including the informed use of heuristics (rules of thumb), intuition, and pattern recognition. 

The assessment of evidence-based medicine skills is difficult because many of the heuristics used by novices are replaced by shortcuts in the hands of experts,6 as are other clinical skills.7


Personal knowledge is built through experience, but because experience does not automatically translate into learning and competence, cognitive and emotional self-awareness is necessary. 

Personal knowledge is usable knowledge gained through experience.8 Clinicians use personal knowledge when they observe a patient's demeanor (such as a facial expression) and arrive at a provisional diagnosis (such as Parkinson disease) before eliciting the specific information to confirm it. Because experience does not necessarily lead to learning and competence,9 cognitive and emotional self-awareness is necessary to help physicians question, seek new information, and adjust for their own biases.


Integrative Aspects of Care

Professional competence is more than a set of isolated competencies; a student who has the individual knowledge and skills needed to treat a given disease may still not be able to care well for a patient with that disease.

Professional competence is more than a demonstration of isolated competencies10; "when we see the whole, we see its parts differently than when we see them in isolation."11 


A competent physician must be able to think, feel, and act in an integrated way.

A competent clinician possesses the integrative ability to think, feel, and act like a physician.6,12- 15 Schon16 argues that professional competence is more than factual knowledge and the ability to solve problems with clear-cut solutions: it is defined by the ability to manage ambiguous problems, tolerate uncertainty, and make decisions with limited information.


Competence depends on bringing expert scientific, clinical, and humanistic judgment to bear in clinical reasoning.

Competence depends on using expert scientific, clinical, and humanistic judgment to engage in clinical reasoning.14,15,17,18 Although expert clinicians often use pattern recognition for routine problems19 and hypothetico-deductive reasoning for complex problems outside their areas of expertise, expert clinical reasoning usually involves working interpretations12 that are elaborated into branching networks of concepts.20- 22 


Building Therapeutic Relationships

The patient-physician relationship affects health and recovery from illness, costs, and outcomes of chronic diseases.

The quality of the patient-physician relationship affects health and the recovery from illness,23,24 costs,25 and outcomes of chronic diseases26- 29 by altering patients' understanding of their illnesses and reducing patient anxiety.26 Key measurable patient-centered28 (or relationship-centered)30,31 behaviors include responding to patients' emotions and participatory decision making.29


Medical errors often stem from system failures rather than individual mistakes; assessment of teamwork and institutional self-assessment can therefore complement assessment of individuals.

Medical errors are often due to the failure of health systems rather than individual deficiencies.32- 34 Thus, the assessment of teamwork and institutional self-assessment might effectively complement individual assessments.


Affective and Moral Dimensions

These domains may be assessed more accurately by peers or patients.

Moral and affective domains of practice may be evaluated more accurately by patients and peers than by licensing bodies or superiors.35 (...) Recent neurobiological research indicates that the emotions are central to all judgment and decision making,13 further emphasizing the importance of assessing emotional intelligence and self-awareness in clinical practice.1,40- 42


Habits of Mind

Many such habits exist, but they are difficult to objectify. Errors in medicine arise from overconfidence that skips past doubt.

Competence depends on habits of mind that allow the practitioner to be attentive, curious, self-aware, and willing to recognize and correct errors.43 Many physicians would consider these habits of mind characteristic of good practice, but they are especially difficult to objectify. (...) Errors in medicine, according to this view, may result from overcertainty that one's impressions are beyond doubt.41,43,44


Context

Competence is context-dependent: it hinges on the relationship between an individual's ability, the task at hand, and the ecology of the health system and clinical situation.

Competence is context-dependent. Competence is a statement of relationship between an ability (in the person), a task (in the world),45 and the ecology of the health systems and clinical contexts in which those tasks occur.46,47 This view stands in contrast to an abstract set of attributes that the physician possesses—knowledge, skills, and attitudes—that are assumed to serve the physician well in all the situations that he or she encounters. (...)


Development

Competence is developmental. There is debate about which aspects of competence are acquired at each stage of training, which raises the question of how the assessment of practicing physicians should differ from that of students. Deciding how, and at what stage of training, the patient-physician relationship should be assessed is also difficult. Changes in medical practice can likewise require redefining competence.

Competence is developmental. There is debate about which aspects of competence should be acquired at each stage of training. (...) which raises the question of whether assessment of practicing physicians should be qualitatively different from the assessment of a student. Determining how and at what level of training the patient-physician relationship should be assessed is also difficult. (...) Changes in medical practice and the context of care invite redefinitions of competence; for example, the use of electronic communication media48 and changes in patient expectations.49,50




CURRENT MEANS OF ASSESSMENT


Assessment must be considered from the following perspectives.

Assessment must take into account what is assessed, how it is assessed, and the assessment's usefulness in fostering future learning. In discussing validity of measures of competence in an era when reliable assessments of core knowledge, abstract problem solving, and basic clinical skills have been developed,45,51- 56 we must now establish that they encompass the qualities that define a good physician: the cognitive, technical, integrative, contextual, relational, reflective, affective, and moral aspects of competence. We distinguish between expert opinion, intermediate outcomes, and the few studies that show associations between results of assessments and actual clinical performance.57- 60


We must consider how the process of assessment can foster future learning.

We consider how the process of assessment might foster future learning. Too often, practitioners select educational programs that are unlikely to influence clinical practice.61 Good assessment is a form of learning and should provide guidance and support to address learning needs. Finally, we address concerns that the medical profession still lacks adequate accountability to the public62 and has not done enough to reduce medical errors.32,63


There are four levels within each domain of assessment.

Within each domain of assessment, there are 4 levels at which a trainee might be assessed (Figure 1).64 

        • The knows level refers to the recall of facts, principles, and theories. 
        • The knows how level involves the ability to solve problems and describe procedures. 
        • The shows how level usually involves human (standardized patient), mechanical, or computer simulations that involve demonstration of skills in a controlled setting. 
        • The does level refers to observations of real practice. 

For each of these levels, the student can demonstrate the ability to imitate or replicate a protocol, apply principles in a familiar situation, adapt principles to new situations, and associate new knowledge with previously learned principles.65






METHODS


Summary of Studies

The Three Most Commonly Used Assessment Methods

The 3 most commonly used assessment methods are 

        • subjective assessments by supervising clinicians, 
        • multiple-choice examinations to evaluate factual knowledge and abstract problem solving,66 and 
        • standardized patient assessments of physical examination and technical and communication skills.67- 69 


The following domains are not well assessed.

Although curricular designs increasingly integrate core knowledge and clinical skills, most assessment methods evaluate these domains in isolation. Few assessments use measures such as participatory decision making70 that predict clinical outcomes in real practice. Few reliably assess clinical reasoning, systems-based care, technology, and the patient-physician relationship.3,69 The literature makes important distinctions between criteria for licensing examinations and program-specific assessments with mixed formative and summative goals.


Assessment of Knowledge and Problem-Solving (MCQ)

Evaluation of factual knowledge and problem-solving skills by using multiple-choice questions offers excellent reliability71- 75 and assesses some aspects of context and clinical reasoning. (...) Standardized test scores have been inversely correlated with empathy, responsibility, and tolerance.83 Also, because of lack of expertise and resources, few medical school examinations can claim to achieve the high psychometric standards of the licensing boards.


Overview of the OSCE

The Objective Structured Clinical Examination (OSCE) is a timed multistation examination often using standardized patients (SPs) to simulate clinical scenarios. The roles are portrayed accurately56,84 and simulations are convincing; the detection rate of unannounced SPs in community practice is less than 10%.57,59,85- 89 Communication, physical examination, counseling, and technical skills can be rated reliably if there is a sufficiently large number of SP cases67,90- 100 and if criteria for competence are based on evidence.101 Although few cases are needed to assess straightforward skills, up to 27 cases may be necessary to assess interpersonal skills reliably in high-stakes examinations.102,103 Although SPs' ratings usually correlate with those of real patients,104 differences have been noted.105- 107


Setting OSCE Pass/Fail Criteria

Defining pass/fail criteria for OSCEs has been complex.54,108- 111 There is debate about who should rate student performance in an OSCE.112 Ratings by the SP are generally accurate52 but may be hampered by memory failure, whereas external raters, either physicians or other SPs, may be less attuned to affective aspects of the interview and significantly increase the cost of the examination.


Checklists

Checklist scores completed by physician-examiners in some studies improve with expertise of the examinees113 and with the reputation of the training program.90,114 But global rating scales of interpersonal skills may be more valid than behavioral checklists.7,115,116 The OSCE scores may not correlate with multiple-choice examinations and academic grades,90,100,117 suggesting that these tools measure different skills. Clinicians may behave differently in examination settings than in real practice,106,118 and short OSCE stations can risk fragmentation and trivialization of isolated elements of what should be a coherent whole.119 The OSCE also has low test reliability for measuring clinical ethics.120


Few validated strategies exist at the does level.

There are few validated strategies to assess actual clinical practice, or Miller's does level. Subjective evaluation by residents and attending physicians is the major form of assessment during residency and the clinical clerkships and often includes the tacit elements of professional competence otherwise overlooked by objective assessment instruments. Faculty ratings of humanism predicted patient satisfaction in one study.121 However, evaluators often do not observe trainees directly. They often have different standards122,123 and are subject to halo effects124 and racial and sex bias.125,126 Because of interpatient variability and low interrater reliability, each trainee must be subject to multiple assessments for patterns to emerge. Standardized rating forms for direct observation of trainees127- 132 and structured oral examination formats have been developed in response to this criticism.133,134


Profiling by managed-care databases 

Profiling by managed-care databases is increasingly used as an evaluation measure of clinical competence. However, data abstraction is complex140 and defining competence in terms of cost and value is difficult. The underlying assumptions driving such evaluation systems may not be explicit. For example, cost analyses may favor physicians caring for more highly educated patients.141


Peer Assessment

Peer ratings are accurate and reliable measures of physician performance.77,142 Peers may be in the best position to evaluate professionalism; people often act differently when not under direct scrutiny.143 Anonymous medical student peer assessments of professionalism have raised awareness of professional behavior, fostered further reflection, helped students identify specific mutable behaviors, and been well accepted by students.35 Students should be assessed by at least 8 of their classmates. The composite results should be edited to protect the confidentiality of the raters.


Self-Assessment

Self-assessments have been used with some success in standardized patient exercises144 and in programs that offer explicit training in the use of self-assessment instruments.145 Among trainees who did not have such training, however, self-assessment was neither valid nor accurate. Rather, it was more closely linked to the trainee's psychological sense of self-efficacy and self-confidence than to appropriate criteria, even among bright and motivated individuals.




COMMENT


Assessment serves personal, institutional, and societal goals (BOX 2). Distinctions between these goals often are blurred in practice. 


Performance is directly measurable, whereas competence is an inferred quality.

Whereas performance is directly measurable, competence is an inferred quality.148 

      • Performance on a multiple-choice test may exceed competence, as in the case of a trainee with a photographic memory but poor clinical judgment. 
      • Conversely, competence may exceed test performance, as in the case of a trainee with severe test anxiety.

Correlation with National Board scores and feedback on graduates' performance can be useful in validating some assessment instruments but should be done with caution. For example, efficiency is highly valued in residents but less so in medical students.





Future Directions



Comprehensive assessments link content across several formats. 

        • Postencounter probes immediately after SP exercises using oral, essay, or multiple-choice questions test pathophysiology and clinical reasoning in context.151,152 
        • Triple-jump exercises152 (consisting of a case presentation, an independent literature search, and then an oral or written postencounter examination) test the use and application of the medical literature. 
        • Validated measures of reflective thinking153 have been developed that use patient vignettes followed by questions that require clinical judgment. These measures reflect students' capacity to organize and link information; also, they predict clinical reasoning ability 2 years later.153 


Combining formats appears to have added value with no loss in reliability.150,154 Ongoing educational outcomes research will show whether composite formats help students learn how to learn more effectively, develop habits of mind that characterize exemplary practice,43 and provide a more multidimensional picture of the examinee than the individual unlinked elements.



Well-functioning health systems are characterized by continuity, partnership between physicians and patients, teamwork between health care practitioners, and communication between health care settings.156,157 

        • The use of time in a continuity relationship can be assessed with a series of SP or real-patient exercises. 
        • To assess partnership, patient assessment, currently used to assess physicians in practice,158 is being tested for students and residents.159,160 These efforts are guided by data showing that patients' ratings of communication and satisfaction correlate well with biomedical outcomes,24,29 emotional distress,161 health care use,25 and malpractice litigation.162 Patient ratings also have the potential to validate other measures of competence.163 
        • Several institutions assess teamwork by using peer assessments. Others use sophisticated mannequins to simulate acute cardiovascular physiological derangements found in intensive care settings164- 169; trainees are graded on teamwork as well as individual problem solving, and statistical adjustments can account for team composition. 
        • Communication between health settings could be assessed at the student level, for example, by grading of their written referral letters.170











JAMA. 2002 Jan 9;287(2):226-35.

Defining and assessing professional competence.

Abstract

CONTEXT:

Current assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.

OBJECTIVES:

To propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.

DATA SOURCES:

We searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.

STUDY SELECTION:

We excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.

DATA EXTRACTION:

Data were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.

DATA SYNTHESIS:

We generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.

CONCLUSIONS:

In addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.
