PREDICTIVE VALIDITY OF THE UCAT, Medical Journal of Australia (October 2011)

The conclusion of Wilkinson and colleagues (1) that UCAT scores and medical school performance are only weakly correlated has been both criticised (2) and supported by others.

Barry Waters (3) criticises the UCAT for taking no account of the effect that failure in the test can have on applicants. However, the University of Notre Dame (UND) uses the GAMSAT (Graduate Medical School Admissions Test) instead of the UCAT, and many of his criticisms of the UCAT apply equally to the GAMSAT.

For example, many students who would make good doctors fail to gain entry to medical school and are scarred by the GAMSAT and by the subjective 'interview' that UND and other graduate medical schools use to select students.

While Waters sees the UCAT as testing 'arcane' matters, the test simply assesses students' generic skills. In my experience teaching students in three countries (Australia, New Zealand and Ireland) since the UCAT's inception, the test has encouraged students to develop these skills in order to do well. A stark difference I have noticed is that, until recently, Irish students were very quiet rote learners. Since the introduction of the HPAT (identical to the UCAT) three years ago, their critical thinking, logical and abstract reasoning, and interpersonal understanding have improved markedly. A similar shift in students' attributes occurred in Australia about 15 years ago when the UCAT was introduced. Who can say that this is a bad thing, especially when universities themselves claim to develop these skills, which they term 'graduate attributes'?

Two criticisms of the study by Wilkinson and colleagues (1) are warranted. First, the UCAT is a test of ability (reasoning skills), whereas university grades measure effort (hard work), so one should not necessarily expect a correlation between the two.

Second, the research is too narrowly focused. What matters is whether the selection system predicts success in later professional life, not whether it predicts academic success. The university's decision to consider dropping the test on this basis is analogous to scrapping university examinations because grades correlate weakly, if at all, with later professional success in any field.

References:
1. Wilkinson D, Zhang J, Parker P. Predictive validity of the Undergraduate Medicine and Health Sciences Admission Test for medical students' academic performance. Med J Aust 2011; 194: 341-344.
2. Griffin BN. Predictive validity of the Undergraduate Medicine and Health Sciences Admission Test for medical students' academic performance. Med J Aust 2011; 195: 102.
3. Waters BNJ. Predictive validity of the Undergraduate Medicine and Health Sciences Admission Test for medical students' academic performance. Med J Aust 2011; 195: 267-268.