No 25 - Does your medical school make you a better doctor? And what is a good doctor anyway?
#MedEd #MedTwitter #MedStudentTwitter
TLDR
Defining what makes a good doctor is very difficult, because the definition is multi-factorial and includes a long list of skills, knowledge, character and performance outcomes. Simplifying this into a reliable measure is hard, and no one has come up with a good way of doing it yet. It is therefore also difficult to measure whether a medical school is any good, because if you don’t know what a good doctor is, how can you tell whether the school has produced better graduates?
I propose that a simple solution would be to ask all patients and colleagues: “would you trust this clinician to treat your family?”
This week, I have enjoyed some serendipity. A podcast I listened to and our Gen Paeds EBM teaching session were on similar topics and one of my favourite topics: medical education!
Almost from day 1 of medical school back in 2009, I have been fascinated by two questions: 1) how do you know if someone is a good doctor?; and 2) how do you know if their medical training made a difference?
Freakonomics, M.D. (@DrBapuPod) Tweeted:
Does where your doctor trained affect the kind of care they provide?
This week, @AnupamBJena talks to @Wharton/@PennCHIBE economist David Asch to find out whether prestigious residency programs actually produce better doctors. https://t.co/Ranwn6msau


McManus, I.C., Harborne, A.C., Horsfall, H.L. et al. Exploring UK medical school differences: the MedDifs study of selection, teaching, student and F1 perceptions, postgraduate outcomes and fitness to practise. BMC Med 18, 136 (2020). https://doi.org/10.1186/s12916-020-01572-3
The brief summary of these two resources is that different medical schools and training programmes probably do have a statistically significant impact on the quality of a doctor at the end of their training. However, this is really difficult to measure, so we cannot say with any confidence that the statement is actually true.
So, why have these two questions fascinated me for so long?
Because they are both really difficult to answer. Take the first question: what makes a good doctor?
In an ideal world you want a doctor who is a brilliant communicator, speaks your language fluently, has an encyclopaedic knowledge of diseases and treatments, can tell you exactly what is causing your symptoms 100% of the time, and can give you a treatment that will work 100% of the time. Likewise, for a surgeon you want someone with all of the above and “an eagle’s eye, a lady’s hands and a lion’s heart” (Sir Astley Cooper).
Surely, I have just answered the question? But alas, no. The above is the ideal, but it is not really achievable, for a number of reasons: 1) there are too many languages for any one person to learn; 2) how do you rate someone’s communication skills, and can you really say one person is always better than another, or is it context dependent?; 3) is it possible to know the important information about every single human disease? I’d argue it is not; 4) likewise, is it possible to know every important fact about every medication? Maybe if you are a memory savant, but not if you are a normal human doctor; and lastly 5) is it possible to give the right diagnosis and treatment every single time? No.
Medicine is a profession that works with human biology, human psychology and statistics. It is not like studying Newtonian physics where rules apply and everything can be predicted for eternity. It is far more like trying to predict the weather, where chaos theory reigns!
So even the best doctors are more like educated gamblers, staking their reputation on a certain diagnosis and treatment and trying to maintain their poker face while they let time pass and the body mostly heals itself. Rather than looking for perfection, the best doctors are good all-rounders who are statistically more likely to be right more of the time, and who hopefully spot the really dangerous red flags and save their patients from dying.
The other major issue with all of the above, is that it is incredibly context dependent and subjective. We do not collect any good objective data on clinical performance, except death.
Death is the ultimate objective marker of a clinician’s ability, but it is a very blunt tool. Even for surgeons doing risky operations, it is not an exact marker of skill, because in modern medicine a whole team of people is involved in the care of any patient: the ED staff, the ward staff, the theatre staff, the surgeon, the anaesthetist, the recovery staff, the ITU staff and so on. If any one of those people makes an error, or there is a flaw in the system as a whole, a patient death may be recorded as a black mark on that surgeon’s case log even though everything the surgeon did was done perfectly.
If you take a completely different speciality like radiology or pathology, you can be far more objective about diagnostic skill; however, you still cannot expect 100% accuracy, because of the uncertainty inherent in the medium they work with. After all, even high-resolution CT scans are just black and white dots on a screen, and even a highly trained eye or AI software can mistake one grey patch for another, slightly different, grey patch. At least with radiology and pathology it is possible to revisit old scans or old samples, match them against a patient’s notes, and see if the diagnosis was objectively correct.
General practice is totally different again. The vast majority of patients are seen, given a referral, reassurance or a treatment, and then never followed up (by the same clinician) to see if they got better. For most conditions treated by a GP there will be no objective test of the diagnosis: no scan, no blood test, no pathology report for the majority of what you treat. Therefore, if the patient goes away and doesn’t come back, we assume either that we were right in our diagnosis and treatment, or that we were close and it doesn’t really matter because the body cured itself. In GP, you only ever really get objective feedback when something has gone really wrong and you receive a complaint, or a letter from secondary care or the coroner!
I have a simple solution to this issue and a complex solution.
The complex solution would be a better electronic patient healthcare record, using big data and patient surveys to follow up every single consultation. These surveys would collect baseline data and then check whether patients’ symptoms had resolved a few weeks later, which would give the whole medical profession more objective data on our performance. “If you don’t measure it you can’t improve it” is an old maxim. But “garbage in = garbage out” is equally true, and the era of big data is not quite ready for every single patient interaction to be measured and tracked.
The simpler solution is to rate clinicians by a summary of this question: “would you trust this clinician to treat you and your family?” 0 – absolutely not to 10 – complete faith in their skills.
In the medical community this question gets asked a lot, and it soon becomes very obvious who would be ranked highly within a group and who wouldn’t. The question summarises a clinician’s communication skills, technical ability, experience, reputation and so on, because it all gets brought together in your gut feeling of whether you trust them. Obviously, those with a confident manner and a good reputation may be complete blaggers, but they would eventually be found out by this system.
The other benefit of this system is that it would be really quick and easy to use and to rank clinicians. The GMC could quite easily put a link next to everyone’s name on the register. The link would be sent periodically to colleagues and to all patients after every consultation, and the score would be updated easily and automatically.
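As a back-of-the-envelope sketch, here is how such a running trust score might be aggregated. Everything here is hypothetical (the `TrustScore` name, the plain 0–10 average); a real system would need to handle recency, rating volume and reviewer type:

```python
from dataclasses import dataclass, field

@dataclass
class TrustScore:
    """Running 'would you trust this clinician?' score on a 0-10 scale."""
    ratings: list = field(default_factory=list)

    def add_rating(self, value: int) -> None:
        # Reject anything outside the 0-10 scale rather than skew the average.
        if not 0 <= value <= 10:
            raise ValueError("rating must be between 0 and 10")
        self.ratings.append(value)

    @property
    def score(self) -> float:
        # No ratings yet -> no score, rather than a misleading zero.
        if not self.ratings:
            return float("nan")
        return sum(self.ratings) / len(self.ratings)

# Example: three patient ratings and one colleague rating.
gp = TrustScore()
for r in (8, 7, 9, 8):
    gp.add_rating(r)
print(round(gp.score, 1))  # 8.0
```

A simple average is only a starting point; a real implementation might use a Bayesian average so that a clinician with three 10/10 ratings does not outrank one with three hundred 9/10 ratings.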
It would then be very easy for those patients who want to be choosy to find the “most trusted” GP or surgeon or radiologist in their area and ask to see them. And just like Uber, if people start to have a bad experience, the clinician’s rating will drop and people will go elsewhere. This competition would drive up standards and allow patients a more objective choice in who they see.
Before you think I have gone mad and am just hell-bent on slating hard-working clinicians: what I expect you would find is a skewed bell curve, with 90% of clinicians scoring around 7/10, a few getting up to the 8s and 9s, and a long tail sloping down to the 0s and 1s (strictly speaking a left-skewed distribution, since the tail points towards the low scores).
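To illustrate what that distribution might look like, here is a toy simulation. The Beta(7, 2) shape is just one plausible guess, not real data:

```python
import random
import statistics

random.seed(0)

# Hypothetical population of 10,000 clinician trust scores on a 0-10 scale.
# A Beta(7, 2) distribution scaled by 10 puts the hump around 7-8 with a
# long tail towards the low scores -- one plausible shape, not real data.
scores = [10 * random.betavariate(7, 2) for _ in range(10_000)]

mean = statistics.mean(scores)
median = statistics.median(scores)

# The long low-score tail drags the mean below the median (negative skew).
print(f"mean={mean:.2f}, median={median:.2f}")
```

Running this, the mean lands below the median, which is the tell-tale sign of the left skew described above.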
The other benefit of this system, besides better patient choice, would be that the GMC could easily focus on those at the bottom of the curve who might need help the most, and it would be easy to find people at the top of the curve who could act as mentors and teachers and share their good practice.
A final few thoughts on whether it matters where you go to medical school and where you do your speciality training programme.
To see whether an intervention makes a difference you first need baseline data and then, ideally, randomisation. In the UK all medical school graduates will have GCSEs and A-levels, so we could use these as baseline data. The medical school application process claims to be validated and consistent, but as far as I can tell it is roughly a randomisation process that might skew a few people towards certain medical schools. So, for example, those with the highest grades, the most confident manner, the best spoken English and perhaps the most academic personalities might end up at Oxford, Cambridge, Imperial, Bristol, Edinburgh and Durham, while those with almost as good grades but a more friendly, chatty, outgoing personality might end up at Keele, Sheffield, Birmingham etc., where they are more likely to become GPs.
Either way, it should be quite easy to do a cohort study using baseline GCSE and A-level data, and then compare grades at medical school with grades in post-graduate exams, the number of exam attempts, and so on. This would tell you roughly whether individual medical schools provide any “added value”. As far as I know, no studies have done this specifically, and therefore no medical school in the UK can truly claim to be the “best medical school for teaching medicine and providing the most added value”. Instead we often get claims that this is the “best medical school” because the “brightest students” come here and do well in post-grad exams (often because they are people who are naturally good at taking tests).
The other flaw with the idea that we can objectively measure the “added value” of a medical school and rank them comes back to question 1: does going to a “prestigious” medical school and a “prestigious” training programme and passing your post-grad exams first time with top scores actually make you a “better doctor”, or just someone who is good at memorising medical facts?
Would it be fair to ask people if they trusted A-level students with treating your family and then see if their “Trust score” increases throughout their training? I don’t know but it would be interesting to find out.
As always, thanks for getting to the end of this article. I’d love to know what you think, so please leave your comments below. And if you have liked what you have read, why not share it with your friends or sign up to receive future articles.