
APPENDIX 1. DETAILED DESCRIPTION OF THE MULTIMODAL ACUTE CARE ASSESSMENT

Participants took part in a multimodal acute care assessment in which they were evaluated at junior doctor level, i.e. the level expected of a recent graduate with six months of clinical experience. Specifically for ACTY and this study, expert clinicians from the five specialties participating in ACTY collaboratively designed a multimodal assessment inspired by the levels of Miller's pyramid: Knows, Knows how, Shows how, and Does [1]. We assessed these components as a proxy for overall competence. Because we required highly comparable assessments of all participants under controlled circumstances, we could not use clinical workplace assessments to evaluate participants at the Does level. Instead, we used high-fidelity simulations in contextually rich scenarios of acutely compromised patients to mimic clinical practice. Simulations can be considered to evaluate performance high in the Shows how level, approaching the Does level [2].

The clinical learning objectives of the Acute Care program, formulated as Entrustable Professional Activities (EPAs) describing authentic acute care tasks that junior doctors face, served as the assessment blueprint. The EPAs listed the knowledge, skills, attitudes, and competencies required to execute each task [3]. Each listed aspect formed a row in the blueprint matrix and was linked to one or more assessment elements, which formed the columns of the matrix. We assessed the Knows level with a knowledge test, the Knows how level with case-based discussions (CBDs), the Shows how level with skills stations in Objective Structured Clinical Examination format, and the higher Shows how level with high-fidelity simulations.

Assessment modes

Knowledge test
This was a 40-minute paper-based written test containing around 40 closed-format and six open-format questions, targeting factual knowledge, applied knowledge, or higher-order thinking. Items requiring application or higher-order thinking yielded more points than factual items. We created new questions or drew upon ACTY faculty databases, and one of the authors (GJ) and an assessment expert evaluated and amended items as needed. Three versions with parallel content existed.
