Classical test theory views assessor variance as random,45 suggesting that examiner-cohort effects might disappear with larger sampling or reduced error variance.66 Our findings are, however, consistent with reliability estimates determined by meta-analysis of other OSCEs,44 suggesting that they have ecological validity. Nonetheless, this emphasises the importance of examiner training, benchmarking and clear marking criteria in ensuring adequate reliability of OSCEs, particularly when they are used for summative assessment. Equally, as reliability is typically limited more by station specificity than by examiner variability, increasing the number of stations is likely to produce larger gains in reliability than examiner-focused approaches.8 Conversely, many medical schools run OSCEs across geographically dispersed sites,18, 46 where the examiners at each site are drawn from clinicians who practise locally and who rarely interact with clinicians from other sites. In this common situation it is reasonable to suggest that examiner cohorts may be systematically different in their practice norms and beliefs, the cohorts of trainees to whom they are exposed, their specialty mixes and their level of specialisation.