Twelve Tips for Programmatic Assessment (Medical Teacher, 2014)

12 Tips for programmatic assessment

C.P.M. VAN DER VLEUTEN1, L.W.T. SCHUWIRTH2, E.W. DRIESSEN1, M.J.B. GOVAERTS1 & S. HEENEMAN1

1Maastricht University, Maastricht, The Netherlands, 2Flinders University, Adelaide, Australia






Introduction

When constructing a programme of assessment, individual assessments should be chosen on the premise that the whole of assessment is more than the sum of its parts. Not every individual assessment therefore needs to be perfect; it is the combination of assessment methods that should be optimal.

From the notion that every individual assessment has severe limitations in any criterion of assessment quality (Van der Vleuten 1996), we proposed to optimise the assessment at the programme level (Van der Vleuten & Schuwirth 2005). In a programme of assessment, individual assessments are purposefully chosen in such a way that the whole is more than the sum of its parts. Not every individual assessment, therefore, needs to be perfect. The dependability and credibility of the overall decision relies on the combination of the emanating information and the rigour of the supporting organisational processes. Old methods and modern methods may be used, all depending on their function in the programme as a whole. The combination of methods should be optimal. After the introduction of assessment programmes we have published conceptual papers on it (Schuwirth & Van der Vleuten 2011, 2012) and a set of guidelines for the design of programmes of assessment (Dijkstra et al. 2012). More recently we proposed an integrated model for programmatic assessment that optimised both the learning function and the decision-making function in competency-based educational contexts (Van der Vleuten et al. 2012), using well-researched principles of assessment (Van der Vleuten et al. 2010). 


Whereas the Dijkstra et al. guidelines are more generic and apply even to assessment programmes without a curriculum, the integrated model is intended for the assessment of constructivist learning programmes. In programmatic assessment (PA), decision-making is decoupled from the individual assessments. Each assessment serves to gather information about the learner, and decisions are made only once sufficient information has been collected across assessments. PA takes a longitudinal view of learning, with assessment tied to specific learning outcomes; growth and development are monitored and mentoring is provided. Once all the information has been gathered, decisions are made by an independent group of examiners. Although this model of PA is widely endorsed in education, many regard PA as complex and merely theoretical.

Whereas the Dijkstra et al. guidelines are generic in nature and even apply to assessment programmes without a curriculum (e.g. certification programmes), the integrated model is specific to constructivist learning programmes. In programmatic assessment decisions are decoupled from individual assessment moments. These individual assessment moments primarily serve for gathering information on the learner. Decisions are only made when sufficient information is gathered across individual moments of assessment. Programmatic assessment also includes a longitudinal view of learning and assessment in relation to certain learning outcomes. Growth and development is monitored and mentored. Decision-making on aggregated information is done by an (independent) group of examiners. Although this model of programmatic assessment is well received in educational practice (Driessen et al. 2012; Bok et al. 2013), many find programmatic assessment complex and theoretical. Therefore, in this paper we will describe concrete tips to implement programmatic assessment.




Tip 1 Develop a master plan for assessment

An overarching structure, usually in the form of a competency framework, must be chosen. This is because pass/fail decisions are not made at the level of each individual assessment, but only after a coherent interpretation across many assessments. The traditional notions of formative and summative assessment are redefined as 'low-stakes' and 'high-stakes' decisions, and 'high-stakes' decisions require many data points.

Just like a modern curriculum is based on a master plan, programmatic assessment has to be based on such a master plan as well. Essential here is the choice of an overarching structure, usually in the form of a competency framework. This is important since in programmatic assessment pass/fail decisions are not taken at the level of each individual assessment moment, but only after a coherent interpretation can be made across many assessment moments. An individual assessment can be considered as a single data point. The traditional dichotomy between formative and summative assessment is redefined as a continuum of stakes, ranging from low- to high-stakes decisions. The stakes of the decision and the richness of the information emanating from the data points are related, ensuring proportionality of the decisions: high-stakes decisions require many data points. In order to meaningfully aggregate information across these data points an overarching structure is needed, such as a competency framework. Information from various data points can be combined to inform the progress on domains or roles in the framework. For example, information on communication from an objective structured clinical examination (OSCE) may be aggregated with information on communication from several mini-clinical evaluation exercises (Mini-CEX) and a multisource-feedback tool.
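The aggregation step described above can be illustrated with a minimal sketch. The data model, method names, domain tags, and scores below are all hypothetical illustrations, not prescribed by the paper; the point is only that each low-stakes data point is tagged with the framework domain it informs, so that information can later be combined per domain rather than per test.

```python
from collections import defaultdict

# Hypothetical low-stakes data points; each is tagged with the
# competency-framework domain it informs (names are illustrative).
data_points = [
    {"method": "OSCE station 3", "domain": "communication", "score": 7.5},
    {"method": "Mini-CEX #1", "domain": "communication", "score": 8.0},
    {"method": "Mini-CEX #2", "domain": "communication", "score": 6.5},
    {"method": "Multisource feedback", "domain": "professionalism", "score": 8.5},
]

def aggregate_by_domain(points):
    """Combine information per framework domain across assessment methods."""
    buckets = defaultdict(list)
    for p in points:
        buckets[p["domain"]].append(p["score"])
    # A simple mean per domain; real programmes would weigh and
    # enrich this with narrative information.
    return {d: round(sum(s) / len(s), 2) for d, s in buckets.items()}

print(aggregate_by_domain(data_points))
# → {'communication': 7.33, 'professionalism': 8.5}
```

Note how the OSCE and Mini-CEX scores are pooled under "communication": the unit of aggregation is the framework domain, not the assessment method.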


The master plan should therefore serve as a map showing the overall assessment structure and where each data point sits in the curriculum. Some assessments, such as direct observation in real-life settings, take place under unstandardised conditions, where expert judgement is unavoidable. Depending on the phase of learning, the master plan mixes standardised and unstandardised methods. Ideally, the master plan for the curriculum and the master plan for assessment are one and the same.

The master plan should therefore also provide a mapping of data points to the overarching structure and to the curriculum. The choices for each method and its content are purposefully chosen with a clear educational justification for using this particular assessment in this part of the curriculum in this moment in time. Many competency frameworks emphasise complex skills (collaboration, professionalism, communication, etc.) that are essentially behavioural, and therefore require longitudinal development. They are assessed through direct observation in real-life settings, under unstandardised conditions, in which professional, expert judgement is imperative. Depending on the curriculum and the phase of study, the master plan will thus contain a variety of assessment contents, a mixture of standardised and non-standardised methods and the inclusion of modular as well as longitudinal assessment elements. For any choice, the contribution to the master plan and through this alignment with the curriculum and the intended learning processes is crucial. The master plan for the curriculum and the assessment is ideally one single master plan.


The resulting subjectivity of unstandardised assessment can be dealt with in PA in two ways.

The resulting subjectivity from non-standardised assessment using professional judgement is something that can be dealt with in programmatic assessment in two ways. 

First, by sampling many contexts and assessors, because many subjective judgements provide a stable generalisation from the aggregated data (Van der Vleuten et al. 1991). 

Second, by applying bias-reduction strategies that show due process in the way decisions are reached. We will revisit these strategies in Tip 6. Subjectivity is not dealt with by removing professional judgement from the assessment process, for example by over-structuring the assessment.




Tip 2 Develop examination regulations that promote feedback orientation

In a summative approach, feedback is largely forgotten. The more credits are tied to individual assessments, the more learners focus on how to pass the test rather than on receiving and following up on feedback. Credit points should follow only high-stakes decisions, based on many data points.

Individual data points are optimised for providing information and feedback to the learner about the quality of their learning and not for pass/fail decisions. Pass–fail decisions should not be made on the basis of individual data points – as is often the case in traditional regulations. Examination regulations traditionally connect credits to individual assessments; this should be prevented in programmatic assessment. Research has shown that feedback is ignored in assessment regimes with a summative orientation (Harrison et al. 2013). Because linking credits to individual assessments raises their stakes, learners will primarily orient themselves towards passing the test instead of towards feedback reception and follow-up (Bok et al. 2013). Credit points should be linked only to high-stakes decisions, based on many data points. In all communication, and most certainly in examination regulations, the low-stakes nature of individual assessments should be given free rein.




Tip 3 Adopt a robust system for collecting information

The e-portfolio has the following advantages.

In programmatic assessment, information about the learner is essential, and a massive amount of information is gathered over time. Being able to handle this information flexibly is vital. One way of collecting information is through the use of (electronic) portfolios. Here, portfolios have a dossier function allowing periodic analyses of the student’s competence development and learning goals. The (e-)portfolio should therefore serve three functions: 

  • (1) provide a repository of formal and informal assessment feedback and other learning results (i.e. assessment feedback, activity reports, learning outcome products, and reflective reports), 
  • (2) facilitate the administrative and logistical aspects of the assessment process (i.e. direct online loading of assessment and feedback forms via multiple platforms, regulation of who has access to which information and by connecting information pieces to the overarching framework), and 
  • (3) enable a quick overview of aggregated information (such as overall feedback reports across sources of information). User-friendliness is vital: the (e-)portfolio should be easily accessible to every stakeholder who has access to it. Many e-portfolios are commercially available, but care should be taken to ensure that the structure and functionalities of these portfolios are sufficiently aligned with the requirements of the assessment programme.
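The three portfolio functions above can be sketched as a tiny data model. This is a hypothetical illustration, not the API of any real e-portfolio product: entries act as the repository (function 1), a reader set stands in for access regulation (function 2), and a per-domain summary stands in for the aggregated overview (function 3).

```python
from dataclasses import dataclass, field

@dataclass
class PortfolioEntry:
    kind: str     # e.g. "assessment feedback", "activity report", "reflective report"
    domain: str   # tag linking the entry to the overarching competency framework
    content: str

@dataclass
class EPortfolio:
    """Minimal sketch of the three portfolio functions (illustrative only)."""
    owner: str
    entries: list = field(default_factory=list)   # function 1: repository
    readers: set = field(default_factory=set)     # function 2: access regulation

    def add_entry(self, entry: PortfolioEntry) -> None:
        self.entries.append(entry)

    def can_read(self, user: str) -> bool:
        return user == self.owner or user in self.readers

    def overview(self) -> dict:
        # function 3: quick aggregated overview per framework domain
        summary: dict = {}
        for e in self.entries:
            summary.setdefault(e.domain, []).append(e.kind)
        return summary

portfolio = EPortfolio("student_a")
portfolio.readers.add("mentor_x")
portfolio.add_entry(PortfolioEntry("assessment feedback", "communication", "Mini-CEX notes"))
portfolio.add_entry(PortfolioEntry("reflective report", "communication", "follow-up plan"))
print(portfolio.overview())
# → {'communication': ['assessment feedback', 'reflective report']}
```

Grouping the overview by framework domain (rather than by assessment method) is what lets a mentor or committee see competence development at a glance.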




Tip 4 Assure that every low-stakes assessment provides meaningful feedback for learning

Information richness is the cornerstone of PA. Meaningful feedback can take many forms.

Information richness is the cornerstone of programmatic assessment. Without rich assessment information programmatic assessment will fail. Mostly, conventional feedback from assessments, that is, grades and pass/fail decisions, are poor information carriers (Shute 2008). Meaningful feedback may have many forms. 

  • One is to provide information on correct and incorrect responses after the test has been administered.
    One is to give out the test material after test administration with information on the correct or incorrect responses. In standardised testing, score reports may be used that provide more detail on the performance (Harrison et al. 2013), for example, by giving online information on the blueprint categories of the assessment done, or on the skill domains (i.e. in an OSCE), or longitudinal overview for progress test results (Muijtjens et al. 2010). 
  • Verbal feedback may also be provided. Unstandardised assessment yields quantitative information through rating scales, but this has limits; feedback on complex skills is better conveyed through narrative information.
    Sometimes verbal feedback in or after the assessment may be given (Hodder et al. 1989). In unstandardised assessment, quantitative information usually stems from the rating scales being used. This is useful, but it also has its limitations. Feedback for complex skills is enhanced by narrative information (Govaerts et al. 2007). 
  • Narrative information can also enrich standardised assessment.
    Narrative information may also enrich standardised assessment. For example, in one implementation of programmatic assessment narrative feedback is given to learners on weekly open-ended questions (Dannefer & Henson 2007). 
  • Forcing a metric onto things that are hard to quantify may strip what is being assessed of its meaning, and may also provoke grade hunting and grade inflation.
    Putting a metric on things that are difficult to quantify may actually trivialise what is being assessed. Metrics such as grades often lead to unwanted side effects such as grade hunting and grade inflation. 
  • Grades may also unintentionally 'corrupt' the feedback process.
    Grades may unintentionally “corrupt” the feedback process. Some argue we should replace scores with words (Govaerts & Van der Vleuten 2013), particularly in unstandardised situations where complex skills are being assessed such as in clinical workplaces. This is not a plea against scores. Scoring and metrics are fine particularly for standardised testing. This is a plea for a mindful use of metrics and words when they are appropriate to use in order to provide meaningful feedback.


Obtaining effective feedback can be a long and laborious process. Resource-saving procedures are worth considering, but providing good-quality feedback ultimately takes time and effort. Two issues should be kept in mind.

Obtaining effective feedback from teachers, supervisors or peers can be a tedious process, because it is time and resource intensive. Considering resource-saving procedures is interesting (e.g. peer feedback or automatic online feedback systems), but ultimately providing good quality feedback will cost time and effort. Two issues should be kept in mind when thinking about the resources. 

  • Assessment and learning are intertwined; the time spent teaching and the time spent assessing are not clearly separated.
    In programmatic assessment, assessment and learning are completely intertwined (assessment as learning), so the time for teaching and assessment becomes rather blurred. 
  • Occasional good feedback is better than frequent useless feedback.
    Second, more infrequent good feedback is better than frequent poor feedback. Feedback reception is highly dependent on the credibility of the feedback (Watling et al. 2012), so the “less-is-more” principle really applies to the process of feedback giving. High-quality feedback should be the prime purpose of any individual data point. If this fails within the implementation, programmatic assessment will fail.




Tip 5 Provide mentoring to learners

Feedback alone may not be enough. Feedback should ideally be part of a reflective dialogue, and mentoring is an effective means of creating such a dialogue.

Feedback alone may not be sufficient to be heeded well by learners (Hattie & Timperley 2007). Research findings clearly indicate that feedback, reflection, and follow-up on feedback are essential for learning and expertise development (Ericsson 2004; Sargeant et al. 2009). Reflection for the mere sake of reflection is not well received by learners, but reflection as a basis for discussion is appreciated (Driessen et al. 2012). Feedback should ideally be part of a (reflective) dialogue, stimulating follow-up on feedback. Mentoring is an effective way to create such a dialogue and has been associated with good learning outcomes (Driessen & Overeem 2013).


In PA, mentoring serves to support the feedback process and the use of feedback. The mentor's role is to get the best out of the learner. Whereas in traditional assessment meeting minimum standards suffices for promotion, PA aims at individual excellence, and the mentor is the key person in achieving that excellence.

In programmatic assessment mentoring is used to support the feedback process and the feedback use. In a dialogue with an entrusted person, performance may be monitored, reflections shared and validated, remediation activities planned, and follow-up may be negotiated and monitored. This is the role of a mentor. The mentor is a regular staff member, preferably having some knowledge of the curriculum. Mentor and learner meet each other periodically. It is important that the mentor is able to create a safe and entrusted relationship. For that purpose the mentor should be protected from having a judgemental role in the decision-making process (Dannefer & Henson 2007). The mentor’s function is to get the best out of the learner. In conventional assessment programmes, adherence to minimum standards can suffice for promotion and graduation. In programmatic assessment individual excellence is the goal and the mentor is the key person to promote such excellence.




Tip 6 Ensure trustworthy decision-making

Because information-rich material is usually both quantitative and qualitative in nature, aggregating it into a judgement requires professional judgement. Given the 'high-stakes' nature of such judgements, they must be sufficiently trustworthy, and procedural measures should provide the evidence for that trustworthiness. These measures may include the following.
High-stakes decisions must be based on many data points of rich information, that is, resting on broad sampling across contexts, methods and assessors. Since this information rich material will be of both quantitative and qualitative nature, aggregation of information requires professional judgement. Given the high-stakes nature, such professional judgement must be credible or trustworthy. Procedural measures should be put in place that bring evidence to this trustworthiness. These procedural measures may include (Driessen et al. 2013):


  • An appointment of an assessment panel or committee responsible for decision-making (pass–fail–distinction or promotion decisions) having access to all the information, for example, embedded in the e-portfolio. Size and expertise of the committee will matter for its trustworthiness.
  • Prevention of conflicts of interest and ensuring independence of panel members from the learning process of individual learners.
  • The use of narrative standards or milestones.
  • The training of committee members on the interpretation of standards, for example, by using exceptional or unusual cases from the past for training purposes.
  • The organisation of deliberation proportional to the clarity of information. Most learners will require very little time; very few will need considerable deliberation. A chair should prepare efficient sessions.
  • The provision of justification for decisions with high impact, by providing a paper trail on committee deliberations and actions, that is, document very carefully.
  • The provision of mentor and learner input. The mentor knows the learner best. To eliminate bias in judgement and to protect the relationship with the learner, the mentor should not be responsible for final pass–fail decisions. Smart compromises for mentor input can be arranged. For example, a mentor may vouch for the authenticity of the e-portfolio. Another example is that the mentor may write a recommendation to the committee that may be annotated by the learner.
  • Provision of appeals procedures.

This list is not exhaustive, and it is helpful to think of any measure that would stand up in court, such as factors that provide due process in procedures and expertise of the professional judgement. These usually lead to robust decisions that have credibility and can be trusted.




Tip 7 Organise intermediate decision-making assessments

High-stakes decisions made at the end of the programme should never take the learner by surprise. Providing intermediate assessment results and prior feedback on the eventual decision increases the credibility of the final decision. Intermediate assessments are based on fewer data points and sit between 'low-stakes' and 'high-stakes'; they can be diagnostic, therapeutic, and prognostic. Ideally an assessment committee would provide all intermediate evaluations, but a full committee review of every student would be far too resource-intensive, so more resource-efficient approaches are worth considering.

High-stakes decisions at the end of the course, year, or programme should never be a surprise to the learner. Therefore, provision of intermediate assessments informing the learner and prior feedback on potential future decisions is in fact another procedural measure adding to the credibility of the final decision. Intermediate assessments are based on fewer data points than final decisions. Their stakes are in between low-stakes and high-stakes decisions. Intermediate assessments are diagnostic (how is the learner doing?), therapeutic (what should be done to improve further?), and prognostic (what might happen if the current development continues to the point of the high-stakes decision?). Ideally, an assessment committee provides all intermediate evaluations, but having a full committee assess all students may well be too resource-intensive. Less costly compromises are to be considered, such as using subcommittees or only the chair of the committee to produce these evaluations, or having the full committee look only at complex student cases while mentors evaluate all other cases.
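The proportionality principle running through Tips 1, 6, and 7 can be sketched as a simple triage rule. The thresholds below are purely illustrative assumptions; the paper deliberately prescribes no numbers, only that the stake of a decision should grow with the number of data points behind it.

```python
def decision_stakes(n_data_points: int, low: int = 5, high: int = 20) -> str:
    """Classify a decision's stake by the evidence available.

    The thresholds `low` and `high` are hypothetical; programmatic
    assessment only requires that stakes be proportional to data points.
    """
    if n_data_points < low:
        return "low-stakes: feedback only"
    if n_data_points < high:
        return "intermediate: diagnostic, therapeutic, prognostic"
    return "high-stakes: promotion/graduation decision possible"

print(decision_stakes(3))   # a single module's worth of data points
print(decision_stakes(25))  # a year's worth, aggregated across methods
```

The middle band is where the intermediate assessments of this tip live: enough evidence for a meaningful progress prognosis, not yet enough for a promotion decision.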




Tip 8 Encourage and facilitate personalised remediation

Remediation differs from resits. It should be based on the diagnostic information emerging from on-going reflective processes and should always be personalised. The curriculum must therefore be flexible enough for learners to plan and complete remediation. There is no need to develop costly remediation packages; instead, involve the learner in deciding what remediation is needed, supported by an experienced mentor. Ideally, remediation becomes the learner's own responsibility, with sufficient support and input provided.

Remediation is essentially different from resits or supplemental examinations. Remediation is based on the diagnostic information emanating from the on-going reflective processes (i.e. from mentor meetings, from intermediate evaluations, and from the learners themselves) and is always personalised. Therefore, the curriculum must provide sufficient flexibility for the learner to plan and complete remediation. There is no need for developing (costly) remediation packages. Engage the learner in making decisions on what and how remediation should be carried out, supported by an experienced mentor. Ideally, remediation is made a responsibility of the learner, who is provided with sufficient support and input to achieve this.




Tip 9 Monitor and evaluate the learning effect of the programme and adapt

The mentor is an important stakeholder.

Just like a curriculum needs evaluation in a plan-do-act cycle, so does an assessment programme. Assessment effects can be unexpected, side effects often occur, and assessment activities, particularly very routine ones, often tend to trivialise and become irrelevant. Monitor, evaluate, and adapt the assessment programme systematically. All relevant stakeholders involved in the process of programmatic assessment provide a good source of information on the quality of the assessment programme. One very important stakeholder is the mentor. Through their interaction with learners, mentors have an excellent view of the curriculum in action. This information could be systematically gathered and exchanged with other stakeholders responsible for the management of the curriculum and the assessment programme. Most schools will have a system for data-gathering on the quality of the educational programme. Mixed-method approaches combining quantitative and qualitative information are advised (Ruhe & Boudreau 2013). Similarly, learners should be able to experience the impact of the evaluations on actual changes in the programme (Frye & Hemmer 2012).




Tip 10 Use the assessment process information for curriculum evaluation

Assessment mainly serves three functions.

Assessment may serve three functions: 

  • to promote learning, 
  • to promote good decisions on whether learning outcomes are achieved, and 
  • to evaluate the curriculum. 


In programmatic assessment, the information richness is a perfect basis also for curriculum evaluation. The assessment data gathered, for example, in the e-portfolio, provides an X-ray not only of the competence development of the learners, but also of the quality of the learning environment.




Tip 11 Promote continuous interaction between the stakeholders

PA affects everyone, and it is therefore the responsibility of the whole educational organisation. Communication is essential, and communication may concern imperfections. A firewall between the assessment committee and the mentors may enable objective, independent decision-making, but it makes the information correspondingly less rich.

As should be clear from the previous tips, programmatic assessment has an impact at all levels: students, examiners, mentors, examination committees, assessment developers, and curriculum designers. Programmatic assessment is, therefore, the responsibility of the whole educational organisation. When it is implemented, frequent and on-going communication between the different stakeholder groups is essential. Communication may concern imperfections in the operationalisation of standards or milestones, incidents, and interesting cases that could have consequences for improvement of the system. Such communication could eventually affect procedures and regulations and may support the calibration of future decisions. For example, a firewall between the assessment committee and mentors fosters objectivity and independence of the decision-making, but at the same time may also hamper information richness. Sometimes, however, decisions need more information about the learner, and then continuous communication processes are indispensable. The information richness in programmatic assessment enables us to make the system as fair as possible.




Tip 12 Develop a strategy for implementation

PA is grounded in a constructivist view of learning. A radical change to the assessment system raises fears that assessment will become soft or vulnerable to students' 'gaming', but implementation examples show the opposite. Nevertheless, because much of higher education is resistant to change, a change strategy is needed.

Programmatic assessment requires a culture change in thinking about assessment that is not easy to achieve in an existing educational practice. Traditional assessment is typically modular, with summative decisions and grades at the end of modules. When passed, the module is completed. When failed, repetition through resits or through repetition of the module is usually the remedy. This is all very appropriate in a mastery-learning view of learning. However, modern education builds on constructivist learning theories, starting from notions that learners create their own knowledge and skills, in horizontally and/or vertically integrated programmes that guide and support competence development. Programmatic assessment is better aligned to notions of constructivist learning and longitudinal competence development through its emphasis on feedback, use of feedback to optimise individual learning and remediation tailored to the needs of the individual student. This radical change often leads to fear that such assessment systems will be soft and vulnerable to gaming by students, whereas the implementation examples demonstrate the opposite effect (Bok et al. 2013). Nevertheless, for this culture change in assessment a change strategy is required, since many factors in higher education are resistant to change (Stephens & Graham 2010). A change strategy needs to be made at the macro-, meso- and micro levels.


  • At the macro level, national legal regulations and university regulations are often strict about assessment policies. Some universities prescribe grade systems to be standardised across all training programmes. These macro level limitations are not easy to influence, but it is important to know the “wriggle room” these policies leave for the desired change in a particular setting. Policy-makers and administrators need to become aware of why a different view on assessment is needed. They also need to be convinced on the robustness of the decision-making in an assessment programme. The qualitative ontology underlying the decision-making process in programmatic assessment is a challenging one in a positivist medical environment. Very important is to explain programmatic assessment in a language that is not jargonistic and which aligns with the stakeholder’s professional language. For clinicians, for example, analogies with diagnostic procedures in clinical health care often prove helpful.
  • At the meso level programmatic assessment may have consequences for the curriculum. Not only should the assessment be aligned with the overarching competency framework, but with the curriculum as well. Essential are the longitudinal lines in the curriculum requiring a careful balance of modular and longitudinal elements. Individual stakeholders and committees need to be involved as early as possible. Examination rules and regulations need to be constructed which are optimally transparent, defensible, but which respect the aggregated decision-making in programmatic assessment. The curriculum also needs to allow sufficient flexibility for remediation. Leaders of the innovation need to be appointed, who have credibility and authority.
  • Finally, at the micro level teachers and learners need to be involved in the change right from the start. Buy-in from teachers and learners is essential. To create buy-in the people involved should understand the nature of the change, but more importantly they should be allowed to see how the change also addresses their own concerns with the current system. Typically, teaching staff do have the feeling that something in the current assessment system is not right, or at least suboptimal, but they do not automatically make the connection with programmatic assessment as a way to solve these problems.


The development of programmatic assessment is a learning exercise for all, and it is helpful to be frank about the unexpected problems that arise during the first phases of the implementation; that is innate to innovation. It is therefore good to structure this learning exercise as a collective effort, which may exceed traditional faculty development (De Rijdt et al. 2013). Although conventional faculty development is needed, involving staff and students in the whole design process increases the chance of success, supports the creation of ownership (Könings et al. 2005) and creates a community of practice promoting sustainable change (Steinert 2014).


The change towards PA is comparable to the transition from a traditional curriculum to problem-based learning (PBL).

Changing towards programmatic assessment can be compared with changing traditional programmes to problem-based learning (PBL). Many PBL implementations have failed due to problems in the implementation (Dolmans et al. 2005). When changing to programmatic assessment, careful attention should be paid to implementation and the management of change at all strategic levels.




Conclusion

Programmatic assessment has a clear logic and is based on many assessment insights that have been shaped through research and educational practice. Logic and feasibility, however, are inversely related in programmatic assessment. To introduce full-blown programmatic assessment in actual practice, all stakeholders need to be convinced. This is not an easy task. Just as in PBL, partial implementations of programmatic assessment are possible (e.g. increasing the feedback and information in an assessment programme, or mentoring). Just as in PBL, this will lead to partial success. We hope these tips will allow you to get as far as you can.








Medical Teacher. 2014 Nov 20:1-6. [Epub ahead of print]


Abstract

Programmatic assessment is an integral approach to the design of an assessment programme with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.

PMID: 25410481

