The controversy over new breast cancer screening guidelines continues unabated today. There are already more than one thousand comments and letters to the editor addressing the issue at The New York Times alone.
Especially interesting are the statements from some of the physicians, both in the articles and in the comments. Many express a degree of confidence in the ability of mammograms to detect cancer well beyond what the literature would justify. How typical, then, is this discrepancy between medical opinion and what the numbers actually reveal? Quite.
Here’s a test given to a group of obstetricians in a study published in 2006:
There’s a blood test available that can detect Down’s syndrome in the fetuses of pregnant women. If the baby has Down’s syndrome there’s a 90% chance the test will catch it. The test has a false positive rate of only 1%. Just 1 in 100 fetuses are likely to have Down’s syndrome. A pregnant woman walks into your office; she’s had the blood test and it’s positive for Down’s syndrome. What advice do you give her about whether or not her baby actually has Down’s syndrome?
Fifty-seven percent of the obstetricians got it wrong. Of those who got it wrong, most got it spectacularly wrong, putting the odds of the baby having Down’s syndrome anywhere from 80% to 100%. And those who were most wrong were the most confident that their answer was correct.
In fact, the odds are 52.4% that the woman’s baby DOESN’T have Down’s syndrome. The trick is the base rate: only 1 in 100 fetuses has the condition, so among every 10,000 pregnancies the test flags roughly 90 true positives but also about 99 false positives — more than half of all positive results are false alarms. Think about what advice that woman would probably get. That’s a very real and chilling example of the inadvertent harm inflicted on women by doctors who put too much faith in even the most accurate diagnostic tests.
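If you want to check that number yourself, here’s a minimal sketch of the Bayes’ theorem calculation using the figures from the scenario above (the variable names are mine, not from the study):

```python
# Bayes' theorem applied to the Down's syndrome screening scenario.
# All numbers come from the problem statement above.

p_downs = 0.01          # prior: 1 in 100 fetuses has Down's syndrome
sensitivity = 0.90      # P(positive test | Down's): 90% detection rate
false_positive = 0.01   # P(positive test | no Down's): 1% false positive rate

# Overall chance of a positive test (law of total probability)
p_positive = sensitivity * p_downs + false_positive * (1 - p_downs)

# Posterior: P(Down's | positive test)
p_downs_given_positive = sensitivity * p_downs / p_positive

print(f"P(Down's | positive test)    = {p_downs_given_positive:.1%}")
print(f"P(no Down's | positive test) = {1 - p_downs_given_positive:.1%}")
```

Running this gives roughly 47.6% and 52.4% — a positive result is slightly more likely to be a false alarm than a true detection, despite the test being "90% accurate."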
Want a handy way to figure out the odds in such cases? More on that tomorrow.