It seems like every few months a new study points out the inefficacy of yet another wide-scale cancer screening. In 2009 the U.S. Preventive Services Task Force suggested that many women undergo mammograms later and less frequently than had been recommended before, because there seems to be little, if any, extra benefit from annual tests. The same group recently issued an even more pointed statement about the prostate-specific antigen test for prostate cancer: it blights many lives but overall doesn't save them.
More recently, researchers at the Dartmouth Institute for Health Policy and Clinical Practice announced that just because a mammogram (almost 40 million are taken every year in the U.S.) detects a cancer does not mean it saves a life. They found that of the estimated 138,000 breast cancers detected annually, the test did not help the vast majority of the women afflicted, some 120,000 to 134,000 of them. The cancers either were so slow-growing that they did not pose a problem, would have been treated successfully if discovered clinically later on, or were so aggressive that little could be done about them. Chest x-rays for lung cancer and Pap tests for cervical cancer have come under similar criticism.
Individual cases dictate what tests and treatment are best, of course, but one factor underlying all these tests is a bit of numerical wisdom that, though well known to mathematicians, bears repeating: when one is looking for something relatively rare (not just cancers but also, say, terrorists), a positive result is very often false. Either the "detected" life-threatening cancer is not there, or it is of a sort that will not kill you.
Rather than looking at the numbers for the prevalence of the above cancers and at the sensitivity and specificity of each of the tests mentioned, consider for illustration cancer X, which, let us assume, afflicts 0.4 percent of the people in a given population (two out of 500) at a certain time. Let us further assume that if you have this cancer, there is a 99.5 percent chance you will test positive. On the other hand, if you do not, we will assume a 1 percent chance you will test positive. We can plug these numbers into Bayes' theorem, an important result from probability theory, and get some insight, but working directly through the arithmetic is both more illustrative and fun.
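For reference, here is a sketch of what that plug-in would look like under the assumptions above, writing C for having cancer X and + for testing positive:

$$P(C \mid +) = \frac{P(+ \mid C)\,P(C)}{P(+ \mid C)\,P(C) + P(+ \mid \neg C)\,P(\neg C)} = \frac{0.995 \times 0.004}{0.995 \times 0.004 + 0.01 \times 0.996} \approx 0.286$$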
Consider that tests for this cancer are administered to one million people. Because the prevalence is two out of 500, approximately 4,000 (1,000,000 x 2/500) people will have it. By assumption, 99.5 percent of these 4,000 people will test positive. That is 3,980 (4,000 x 0.995) positive tests. But 996,000 (1,000,000 - 4,000) of the people tested will be healthy. Yet by assumption, 1 percent of these 996,000 people will also test positive. That is, there will be about 9,960 (996,000 x 0.01) false positive tests. Thus, of the 13,940 positive tests (3,980 + 9,960), only 3,980/13,940, or 28.6 percent, will be true positives.
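For readers who would like to check the count themselves, here is a minimal sketch of the same arithmetic in Python. The prevalence, sensitivity, and false-positive rate are the illustrative figures assumed above for hypothetical cancer X, not data from any real test.

```python
# Illustrative figures for hypothetical cancer X, as assumed in the text.
population = 1_000_000
prevalence = 2 / 500         # 0.4 percent of people have cancer X
sensitivity = 0.995          # chance of a positive test if you have it
false_positive_rate = 0.01   # chance of a positive test if you do not

sick = population * prevalence                    # about 4,000 people
healthy = population - sick                       # about 996,000 people

true_positives = sick * sensitivity               # about 3,980
false_positives = healthy * false_positive_rate   # about 9,960

total_positives = true_positives + false_positives  # about 13,940
ppv = true_positives / total_positives               # about 0.286

print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"Share of positive tests that are real: {ppv:.1%}")
```

Running it prints roughly 3,980 true positives, 9,960 false positives, and a 28.6 percent share of positive tests that reflect an actual cancer, matching the figures worked out above.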
If the 9,960 healthy people are subjected to harmful treatments ranging from surgery to chemotherapy to radiation, the net benefit of the tests might very well be negative.
The numbers will vary with different cancers and tests, but this kind of trade-off will always arise in that nebulous region between psychology and mathematics. A life saved because of a test, though not that common, is a much more psychologically available outcome than the many substantial, yet relatively invisible, ill effects to which the test often leads.
Source: http://rss.sciam.com/click.phdo?i=2fa8f5777cb7042c654745f649a51603