Evaluating Educational Programmes (AMEE Guide No. 29) (Med Teach, 2006)
AMEE Education Guide no. 29: Evaluating educational programmes
JOHN GOLDIE
Department of General Practice, University of Glasgow, UK
Introduction
Evaluation is integral to the implementation and development of educational activities, whether national programmes, an individual school's curriculum or a piece of work undertaken by a teacher with his/her students. In the UK the focus of evaluation in mainstream general education has seen a recent shift from development towards teacher accountability and appraisal (Kelly, 1989).
What is evaluation?
Evaluation is defined in the Collins English Dictionary (1994) as "the act of judgement of the worth of ...". As such it is an inherently value-laden activity. However, early evaluators paid little attention to values, perhaps because they naively believed their activities could, and should, be value free (Scriven, 1983). The purpose(s) of any scheme of evaluation often vary according to the aims, views and beliefs of the person or persons making the evaluation. Experience has shown it is impossible to make choices in the political world of social programming without values becoming important in choices regarding evaluative criteria, performance standards, or criteria weightings (Shadish et al., 1991).
The values of the evaluator are often reflected in some of the definitions of evaluation which have emerged:
-
Gronlund (1976), influenced by Tyler's goal-based conception of evaluation, described it as "the systematic process of determining the extent to which instructional objectives are achieved".
-
Cronbach (Cronbach et al., 1980), through reflection on the wider field of evaluation and influenced by his view of evaluators as educators, defined evaluation as "an examination conducted to assist in improving a programme and other programmes having the same general purpose".
In education the term evaluation is often used interchangeably with assessment, particularly in North America. While
-
assessment is primarily concerned with the measurement of student performance,
-
evaluation is generally understood to refer to the process of obtaining information about a course or programme of teaching for subsequent judgement and decision-making (Newble & Cannon, 1994).
Mehrens (1991) identified two of the purposes of assessment as:
-
1. to evaluate the teaching methods used;
-
2. to evaluate the effectiveness of the course.
Assessment can, therefore, be looked upon as a subset of evaluation, its results potentially being used as a source of information about the programme.
History of evaluation
Planned social evaluation has been noted as early as 2200 BC, with personnel selection in China (Guba & Lincoln, 1981). Evaluations have also been chronicled during the last 200 years (Cronbach et al., 1980; Madaus et al., 1983; Rossi & Freeman, 1985).
Modern evaluation theories and practices, however, have their intellectual roots in the work of
-
Tyler (1935) in education,
-
Lewin (1948) in social psychology, and
-
Lazarsfeld (Lazarsfeld & Rosenberg, 1955) in sociology.
The main stimulus to the development of modern evaluation theories and practices, however, was the post-Second World War rapid economic growth in the Western world, particularly the United States, and the interventionist role taken by governments in social policy during the 1960s. With the increasing amounts of money being spent on social programmes there was the growing recognition that these programmes required proper evaluation, and mandatory evaluation was introduced.
The earliest evaluation theorists, with little experience to reflect on, concentrated on methodology.
Reflection on increasing experience led to the diversification and change of evaluation theories and practice. There was no longer an exclusive reliance on comparative, outcome studies. The quality of programme implementation and the causal processes mediating programme impact were also considered (Sechrest et al., 1979). This resulted in the greater use of qualitative methods in evaluation (Guba & Lincoln, 1981).
In education, evaluation for centuries had been mainly equated with student testing (Popham, 1988). Tyler argued that a programme should be judged on the extent to which students obtained mastery of the programme's pre-stated objectives. Tyler's work, together with that of Bloom (1956) and Taba (1962), led to the development of the linear, hierarchical, objectives model of curriculum planning, with its structure of aims–learning experiences–content–organization of learning–evaluation. This 'industrial' approach to curriculum planning influenced many of the attempts at curriculum evaluation in the 1960s, and also influenced the development of formal evaluation strategies (Holt, 1981).
Cronbach, in his 1963 article 'Course improvement through evaluation', argued that if evaluation was to be of value to curriculum developers it should focus on the decisions they faced during the development phase of their curricula. He also argued that evaluation should deal less with comparisons between programmes, and more with the degree to which the programme promoted its desired purpose(s).
As with social programming in general, with the increasing sums being spent on educational programmes in the United States, mandatory evaluation was introduced. The requirement for mandatory evaluation of curriculum innovation crossed the Atlantic and formal evaluation was made an essential requirement of all curriculum projects by such funding bodies as the Schools Council (Kelly, 1989).
By the early 1970s the field had grown rapidly. There was growing belief in the power of evaluation to transform poor educational programmes into highly effective ones, and in the importance of evaluation results to decision-makers. However, this optimism did not last. Experience showed that most educational decisions of importance continued to be taken in a political, interpersonal milieu, where evidence plays a minor role (Popham, 1988).
With the realization of the political nature of the decision-making process, educational evaluators began to embrace Cronbach's view of the evaluator as an educator, in that he/she should rarely attempt to focus his/her efforts on satisfying a single decision-maker, but should focus those efforts on "informing the relevant political community" (Cronbach, 1982b). They also realized that, while many of their attempts at evaluation did not work, some did, and when they worked programme quality improved to varying degrees. Improvement, even when modest, was recognized to be valuable (Popham, 1988).
Effecting programme evaluation
Initiation/commissioning
The initial stage of evaluation is where the institutions or individuals responsible for a programme take the decision to evaluate it. They must decide on the purpose(s) of the evaluation, and who will be responsible for undertaking it.
Table 1. Common reasons for undertaking evaluation and common areas of evaluation activity (after Muraskin, 1998).
Chelimsky & Shadish (1997) suggest that the purposes of evaluation, along with the questions evaluators seek to answer, fall into three general categories:
-
1. evaluation for accountability;
-
2. evaluation for knowledge;
-
3. evaluation for development.
In order to produce an effective educational evaluation, Coles & Grant (1985) point out that skills from many disciplines, for example psychology, sociology, philosophy, statistics, politics and economics, may be required.
Defining the evaluator's role
This is important to establish as it will influence the decision-making process on the goals of the evaluation, and on the methodology to be used.
The ethics of evaluation
The ethics of an evaluation, however, are not the sole responsibility of the evaluator(s). Evaluation sponsors, participants and audiences share ethical responsibilities.
House (1995) lists five ethical fallacies of evaluation:
-
1. Clientism—the fallacy that doing whatever the client requests or whatever will benefit the client is ethically correct.
-
2. Contractualism—the fallacy that the evaluator is obliged to follow the written contract slavishly, even if doing so is detrimental to the public good.
-
3. Methodologicalism—the belief that following acceptable inquiry methods assures that the evaluator’s behaviour will be ethical, even when some methodologies may actually compound the evaluator’s ethical dilemmas.
-
4. Relativism—the fallacy that opinion data the evaluator collects from various participants must be given equal weight, as if there is no basis for appropriately giving the opinions of peripheral groups less priority than that given to more pivotal groups.
-
5. Pluralism/Elitism—the fallacy of allowing powerful voices to be given higher priority, not because they merit such priority, but merely because they hold more prestige and potency than the powerless or voiceless.
Drawing on these, Worthen et al. (1997) have suggested the following standards could be applied:
-
1. Service orientation—evaluators should serve not only the interests of the individuals or groups sponsoring the evaluation, but also the learning needs of the programme participants, community and wider society.
-
2. Formal agreements—these should go beyond producing technically adequate evaluation procedures to include such issues as following protocol, having access to data, clearly warning clients about the evaluation’s limitations and not promising too much.
-
3. Rights of human subjects—these include obtaining informed consent, maintaining rights to privacy and assuring confidentiality. They also extend into respecting human dignity and worth in all interactions so that no participants are humiliated or harmed.
-
4. Complete and fair assessment—this aims at assuring that both the strengths and weaknesses of a programme are accurately portrayed.
-
5. Disclosure of findings—this reflects the evaluator’s responsibility to serve not only his/her client or sponsor, but also the broader public(s) who supposedly benefit from both the programme and its accurate evaluation.
-
6. Conflict of interest—this cannot always be resolved. However, if the evaluator makes his/her values and biases explicit in an open and honest way clients can be aware of potential biases.
-
7. Fiscal responsibility—this includes not only the responsibility of the evaluator to ensure all expenditures are appropriate, prudent and well documented, but also the hidden costs for personnel involved in the evaluation.
However, the various educational backgrounds and professional affiliations of evaluators can result in them practising under several different and potentially conflicting ethical codes (Love, 1994). Given the pluralistic nature of those involved in evaluation, and the wider society, it is little wonder a consensus ethical code has not yet emerged.
Choosing the questions to be asked
Shadish et al. (1991) supply a useful set of questions for evaluators to ask when starting an evaluation. These cover the five components of evaluation theory and provide a sound practical basis for evaluation planning (boxes 1–5).
Box 1: Questions to ask about social programming
Box 2: Questions to ask about use
Box 3: Questions to ask about knowledge construction
Box 4: Questions to ask about valuing
Box 5: Questions to ask about evaluation practice
Designing the evaluation
Dimensions of evaluation
Stake (1976) suggested eight dimensions along which evaluation methods may vary (a schematic sketch follows the list):
-
(1) Formative–summative: This distinction was first made by Scriven (1967). Formative evaluation is undertaken during the course of a programme with a view to adjusting the materials or activities. Summative evaluation is carried out at the end of a programme. In the case of an innovative programme it may be difficult to determine when the end has been reached, and often the length of time allowed before evaluation takes place will depend on the nature of the change.
-
(2) Formal–informal: Informal evaluation is undertaken naturally and spontaneously and is often subjective. Formal evaluation is structured and more objective.
-
(3) Case particular–generalization: Case-particular evaluation studies only one programme and relates the results only to that programme. Generalization may study one or more programmes, but allow results to be related to other programmes of the same type. In practice results may lend themselves to generalization, and the attempt to formulate rules for case study recognizes that generalizing requires greater control, and more regard to setting and context (Holt, 1981).
-
(4) Product–process: This distinction mirrors that of the formative–summative dimension. In recent years evaluators have been increasingly seeking information in the additional area of programme impact.
-
(a) Process information: In this dimension information is sought on the effectiveness of the programme’s materials and activities. Often the materials are examined during both programme development and implementation. Examination of the implementation of programme activities documents what actually happens, and how closely it resembles the programme’s goals. This information can also be of use in studying programme outcomes.
-
(b) Outcome information: In this dimension information is sought on the short-term or direct effects of the programme on participants. In medical education the effects on participants’ learning can be categorized as instructional or nurturant. The method of obtaining information on the effects of learning will depend on which category of learning outcome one attempts to measure.
-
(c) Impact information: This dimension looks beyond the immediate results of programmes to identify longer-term programme effects.
-
(5) Descriptive–judgmental: Descriptive studies are carried out purely to secure information. Judgmental studies test results against stated value systems to establish the programme's effectiveness.
-
(6) Pre-ordinate–responsive: This dimension distinguishes between the situation where evaluators know in advance what they are looking for, and one where the evaluator is prepared to look at unexpected events that might come to light as he/she goes along.
-
(7) Holistic–analytic: This dimension marks the boundary between an evaluation that looks at the totality of a programme and one that looks only at a selection of key characteristics.
-
(8) Internal–external: This separates evaluations using an institution's own staff from those that are designed by, or which require to satisfy, outside agencies.
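To make the dimensions concrete, here is a minimal, illustrative Python sketch (my own, not from the guide) that treats a specific evaluation design as a choice of position on each axis. Only two of the eight axes are spelled out, and all names are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

# Two of Stake's (1976) dimensions modelled as two-pole axes.
# The remaining six (case particular-generalization, product-process,
# descriptive-judgmental, pre-ordinate-responsive, holistic-analytic,
# internal-external) would follow the same pattern.
class Timing(Enum):
    FORMATIVE = "during the programme, to adjust materials or activities"
    SUMMATIVE = "at the end of the programme, to judge it"

class Formality(Enum):
    INFORMAL = "natural, spontaneous, often subjective"
    FORMAL = "structured, more objective"

@dataclass
class EvaluationDesign:
    """A concrete design is a position on each dimension."""
    timing: Timing
    formality: Formality

# Example: an informal, formative review of a course still in progress.
design = EvaluationDesign(Timing.FORMATIVE, Formality.INFORMAL)
print(design.timing.value)
```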
Choosing the appropriate design
Shadish et al. (1991) advocate that evaluation theory can help tell us when, where and why some methods should be applied and others not.
Cronbach (1982a) advises evaluators to be eclectic in their choice of methods, avoiding slavish adherence to any particular methods.
Rossi & Freeman (1985) advocate the 'good enough' rule for choosing evaluation designs: "The evaluator should choose the best possible design, taking into account practicality and feasibility . . . the resources available and the expertise of the evaluator", a view echoed by Popham (1988).
Table 3. Common quantitative and qualitative methods and instruments for evaluation.
He proposes seven technical guidelines for the evaluator in planning and conducting his/her evaluation (a sketch of guidelines 4–6 follows the list):
-
(1) Identify the tasks to be done.
-
(2) Identify different options for doing each task.
-
(3) Identify strengths, biases and assumptions associated with each option.
-
(4) When it is not clear which of the several defensible options is least biased, select more than one to reflect different biases, avoid constant biases and overlook only the least plausible biases.
-
(5) Note convergence of results over options with different biases.
-
(6) Explain differences of results yielded by options with different biases.
-
(7) Publicly defend any decision to leave a task homogeneous.
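As a hedged illustration of guidelines (4)–(6) only (the option names and data below are invented, not from the guide): running several defensibly biased options for the same task and then checking whether their results converge might look like this:

```python
# Hypothetical sketch of guidelines (4)-(6): run several defensible
# options for the same evaluation task and compare their results.
from statistics import mean, pstdev

# Mean programme ratings produced by three differently biased options
# (e.g. student survey, peer observation, external review) -- invented data.
results = {
    "student_survey": 4.1,
    "peer_observation": 3.8,
    "external_review": 3.9,
}

spread = pstdev(results.values())
if spread < 0.25:  # arbitrary convergence threshold, for illustration only
    # Guideline (5): note convergence across differently biased options.
    print(f"Options converge on ~{mean(results.values()):.1f}; "
          "a constant shared bias remains possible.")
else:
    # Guideline (6): divergence must be explained, not averaged away.
    print(f"Options diverge (spread {spread:.2f}); explain the differences.")
```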
Approaches to evaluation
One of the most useful was developed by Worthen et al. (1997), influenced by the work of House (1976, 1983). They classify evaluation approaches into the following six categories:
-
(1) Objectives-oriented approaches—where the focus is on specifying goals and objectives and determining the extent to which they have been attained.
-
(2) Management-oriented approaches—where the central concern is on identifying and meeting the informational needs of managerial decision-makers.
-
(3) Consumer-oriented approaches—where the central issue is developing evaluative information on ‘products’, broadly defined, for use by consumers in choosing among competing products, services etc.
-
(4) Expertise-oriented approaches—these depend primarily on the direct application of professional expertise to judge the quality of whatever endeavour is evaluated.
-
(5) Adversary-oriented approaches—where planned opposition in points of view of different evaluators (for and against) is the central focus of the evaluation.
-
(6) Participant-oriented approaches—where involvement of participants (stakeholders in the evaluation) is central in determining the values, criteria, needs and data for the evaluation.
These categories can be placed along House's (1983) dimension of utilitarian to intuitionist-pluralist evaluation (Figure 1).
Figure 1. Distribution of the six evaluation approaches on the utilitarian to intuitionist–pluralist evaluation dimension.
Placement along the dimension is to some degree arbitrary. As evaluation is multifaceted and can be conducted at different phases of a programme's development, the same evaluation approach can be classified in diverse ways according to emphasis.
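Echoing that arbitrariness, the following sketch (positions invented for illustration; not Worthen et al.'s actual Figure 1 placements) shows one way to picture the six approaches as points on House's single axis:

```python
# Illustrative only: invented positions on House's utilitarian (0.0)
# to intuitionist-pluralist (1.0) dimension.
approaches = {
    "objectives-oriented": 0.10,
    "management-oriented": 0.25,
    "consumer-oriented": 0.40,
    "expertise-oriented": 0.60,
    "adversary-oriented": 0.75,
    "participant-oriented": 0.90,
}

for name, pos in sorted(approaches.items(), key=lambda kv: kv[1]):
    # Crude text rendering of the dimension, utilitarian pole at the left.
    print(f"{name:22s} |{'-' * int(pos * 40)}*")
```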
Table 4. Examples of approaches that predominantly fit into Worthen et al.’s categories (1997).
Table 5. These are considered under the following headings after Worthen et al. (1997):
Interpreting the findings
Having collected the relevant data, the next stage in evaluation involves its interpretation. Coles & Grant (1985) view this process as involving two separate, though closely related, activities: analysis and explanation.
Growing consciousness of the fallibility of observation is reflected in the growth of multiple methods of data collection, in having multiple investigators analyse the same data set, and in doing data analysis in more of an exploratory than confirmatory mode (Glymour & Scheines, 1986).
When both qualitative and quantitative methods are used in the same study, results can be generated that have different implications for the overall conclusion, leading to creative tension which may be resolved only after many iterations (Hennigan et al., 1980).
Recognition of the social components of evaluation knowledge and the fallibility of evaluation methodologies has led to methods for critically scrutinizing evaluation questions and methods. These include:
-
commentaries on research plans by experts and stakeholders;
-
monitoring of the implementation of evaluations by government bodies and scientific advisory bodies;
-
simultaneous funding of independent evaluations of the same programme;
-
funding secondary analyses of collected data;
-
including comments in final reports by personnel from the programme evaluated; and
-
forcing out latent assumptions of evaluation designs and interpretations, often through some form of adversarial process or committees of substantive experts (Cook, 1974; Cronbach, 1982a).
Scriven (1980) developed the Key Evaluation Checklist, a list of dimensions and questions to guide evaluators in this task (Table 6).
Table 6. Key evaluation checklist.
Dissemination of the findings
Again, Shadish et al.'s questions on evaluation use (box 2) are of value in considering how, and to whom, the evaluation findings are to be reported.
Coles & Grant (1985) list the following considerations:
-
(1) Different audiences require different styles of report writing.
-
(2) The concerns of the audience should be reviewed and taken into account (even if not directly dealt with).
-
(3) Wide audiences might require restricted discussion or omission of certain points.
-
(4) The language, vocabulary and conceptual framework of a report should be selected or clarified to achieve effective communication.
Hawkridge (1979) identified three possible barriers to successful dissemination of educational research findings:
-
(1) The problem of translating findings into frames of reference and language which the target audience can understand. However, the danger in translating findings for a target audience is that the evaluator may, as a result, present the findings in a less than balanced manner.
-
(2) If the findings are threatening to vested interests, they can often be politically manoeuvred out of the effective area.
-
(3) The 'scientific', positivistic approach to research still predominates in most academic institutions, which may view qualitative research methods and findings as 'soft' and be less persuaded by them. As qualitative methods receive greater acceptance this is becoming less of a problem.
Influencing decision-making
Coles & Grant (1985) suggest the following ways in which evaluators can effect the educational decision-making process:
-
(1) involving the people concerned with the educational event at all stages of the evaluation;
-
(2) helping those who are likely to be associated with the change event to see more clearly for themselves the issues and problems together with putative solutions;
-
(3) educating people to accept the findings of the evaluation, possibly by extending their knowledge and understanding of the disciplines contributing towards an explanation of the findings;
-
(4) establishing appropriate communication channels linking the various groups of people involved with the educational event;
-
(5) providing experimental protection for any development, allocating sufficient resources, ensuring it has a realistic life expectancy before judgements are made upon it, monitoring its progress;
-
(6) appointing a coordinator for development, a so-called change agent;
-
(7) reinforcing natural change. Evaluation might seek out such innovations, strengthen them and publicize them further.
Conclusions
Evaluators have to be aware of the political context in which many evaluations take place and of their own values and beliefs.
Evaluators should be aware of the limitations of individual evaluation approaches and be eclectic in their choice of methods. The 'good enough' rule is worth remembering.
On reviewing the results of his/her endeavour, it is important for the educational evaluator to remember the lesson history teaches: that improvement, even when modest, is valuable.
Goldie J. AMEE Education Guide no. 29: Evaluating educational programmes. Med Teach. 2006 May;28(3):210-24. PMID: 16753718. DOI: 10.1080/01421590500271282.
Author information: Department of General Practice and Primary Care, Community Based Sciences, University of Glasgow, UK. johngoldie@fsmail.net
Comment in: Ethics and evaluating educational programmes. Med Teach. 2007.