Sunday, April 30, 2006

Medical Prophecy

If you do a web search for information on some life-threatening illness or risky operation, you are likely to come up with information in the form of survival rates: Of people who have this operation, X% are still alive five years later. Assuming I correctly understand where it comes from, such information is wrong—in a predictable direction.

My assumption is that such information is generated by looking at patients who had the operation at least five years ago—say between five and ten years—and seeing how many of them were still alive five years later. That gives a survival rate, but it is the survival rate as of five to ten years ago. Medical technology has been improving in recent decades, which means that the survival rate for someone who has the operation today is probably better, perhaps significantly better, than for someone who had it five years ago.

One way of getting a better estimate would be extrapolation. Calculate the survival rate for patients who had the operation five years ago, six years ago, seven years ago ... fifteen years ago. Assume that whatever rate of improvement you observe continues; fit the data with a straight line or simple curve. See what extrapolating that line tells you about the risk if you have the operation tomorrow.
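Here is a minimal sketch, in Python, of the extrapolation I have in mind. The survival figures and the linear trend are made up purely for illustration; real numbers would have to come from actual registry data.

```python
# A minimal sketch of the proposed extrapolation.  The survival rates
# below are made-up numbers standing in for real data; the method is
# just a straight-line fit projected forward to "today".
import numpy as np

years_ago = np.arange(5, 16)                   # cohorts operated on 5..15 years ago
survival = np.array([62.0, 61.2, 60.5, 59.9, 59.0, 58.4, 57.7,
                     57.1, 56.2, 55.6, 54.9])  # hypothetical 5-year survival rates (%)

# Fit a straight line: survival rate as a function of how long ago the
# operation took place.
slope, intercept = np.polyfit(years_ago, survival, 1)

# Extrapolate to years_ago = 0, i.e. someone who has the operation today.
print(f"Most recent measurable cohort: {survival[0]:.1f}%")
print(f"Observed improvement:          about {-slope:.2f} points per year")
print(f"Extrapolated rate for today:   {intercept:.1f}%")
```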

Can someone better informed than I am about medical statistics say whether I am correct about how they are calculated, or whether something like what I have just described is current practice?

10 comments:

Glen Whitman said...

X-year survival rates can be biased in the other direction by improvements in detection. If a medical condition is detected earlier, then patients will tend to live a longer time beyond the first detection even if the treatment has not improved.

Glen Whitman said...

Right, I was looking at it from the standpoint of evaluating the quality of treatment. As detection improves, we shift toward a population of people whose condition is (on average) not as advanced, and who are therefore likely to live longer even if we hold treatment constant.

But the effects could certainly be difficult to parse. We might have a given treatment that has not improved at all, but that does (and always did) perform better with early detection. In that case, better detection techniques interact with the same treatment options to produce a higher survival rate. To identify actual improvements in the treatment itself, you'd have to control for the stage of the illness at the time of diagnosis.

"Since detection has improved, I am likely to be less advanced in disease X than people in past studies, *and* I'm likely to be getting better treatment." That's if you're looking for expected survival time conditional only on having been diagnosed. But if you've been diagnosed, you probably already know the stage of your illness, too, in which case you'd want your assessment to be conditional on the stage as well.

Gil said...

I asked this question of a doctor friend of mine and he said I could post his response:

When a medical study says:
"Of people who have this operation, X% are still alive five years later."

This means:
At the time the last study was COMPLETED (which may have been one month ago or ten years ago), the five-year survival rate was X%.

This survival rate is useful and easy to compute.

Computing a future estimated survival rate is much more difficult and problematic.

Should we use a linear extrapolation of previous data? Curvilinear? Polynomial? Exponential?

Should we extrapolate from the previous 5 years? 10 years? As far back as the data permits? 100 years?

If a new surgical treatment became available 15 years ago, should we only extrapolate from the past 15 years? What if a new surgical treatment became available 15 years ago, and a new chemotherapy agent became available 10 years ago, and scientists debate which was the more important advance?
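To make those choices concrete, here is a rough sketch (my own illustration, with made-up numbers) of how two of them, the functional form and how far back to look, change the forecast:

```python
# Fit the same hypothetical historical 5-year survival rates two ways
# (a straight line, and a curve approaching a 100% ceiling) and over two
# different windows of past data.  Each combination gives a somewhat
# different forecast for a patient treated today; with real data and
# longer projections the differences can matter more.
import numpy as np

years_ago = np.arange(5, 16)                   # cohorts 5..15 years back
survival = np.array([62.0, 61.2, 60.5, 59.9, 59.0, 58.4, 57.7,
                     57.1, 56.2, 55.6, 54.9])  # hypothetical 5-year rates (%)

def forecast_linear(t, s):
    """Straight-line fit, evaluated at 'today' (years_ago = 0)."""
    slope, intercept = np.polyfit(t, s, 1)
    return intercept

def forecast_ceiling(t, s):
    """Assume the shortfall (100% - survival) shrinks exponentially, so
    log(shortfall) is linear in time; evaluate at years_ago = 0."""
    slope, intercept = np.polyfit(t, np.log(100.0 - s), 1)
    return 100.0 - np.exp(intercept)

for window in (6, 11):                         # last 6 cohorts vs. all 11
    t, s = years_ago[:window], survival[:window]
    print(f"window = {window:2d} cohorts: "
          f"linear -> {forecast_linear(t, s):.1f}%, "
          f"ceiling -> {forecast_ceiling(t, s):.1f}%")
```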

What if a new diagnostic test became available 10 years ago? Perhaps it helped by finding the disease at an earlier, more treatable stage. Perhaps the test only identifies the disease earlier, but not early enough for it to be treated any better. In that case, survival-from-time-of-diagnosis is increased, but the course of the disease process is not affected. This is called "lead time bias".
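A toy numerical illustration of lead time bias (my example, not the doctor's): moving the diagnosis earlier lengthens measured survival-from-diagnosis even when the date of death is unchanged.

```python
# Hypothetical patient who dies at 70 regardless of which test is used.
death_age = 70.0
old_test_diagnosis = 68.0   # old test finds the disease at age 68
new_test_diagnosis = 65.0   # new test finds the same disease at age 65

print("Survival from diagnosis, old test:", death_age - old_test_diagnosis, "years")
print("Survival from diagnosis, new test:", death_age - new_test_diagnosis, "years")
# Measured survival "improves" from 2 to 5 years even though treatment
# did nothing more for this patient.
```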

So predicting precise future survival rates based on past trends is difficult.

Anyway, do you really care if the 5-year survival rate for your disease is 55% or 75%? Maybe, if you are a health insurance company and need to compute future costs precisely. If you are a patient, does the difference between the 55% prediction and the 75% prediction change any decisions you make?

Much more important is the question:
Does treatment A give better survival rates than treatment B?

To answer that question, you would like to see a prospective randomized trial of treatment A versus treatment B.

Prospective: The two treatments are given to two groups of patients at the same time. This avoids the problems of comparing the survival rate from a new treatment with historical controls from years ago, when medical technology was less advanced.

Randomized: Avoids selection bias, so that sicker patients don't get put into one group preferentially.
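Here is a toy simulation (again my own illustration, not part of the doctor's comment) of why randomization matters: two treatments that are in fact identical look very different when sicker patients are preferentially assigned to one of them, and essentially the same when assignment is random.

```python
# Simulate patients whose survival depends only on illness severity,
# then compare two identical "treatments" under biased and random
# assignment.  All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
severity = rng.uniform(0, 1, n)              # hypothetical illness severity
survives = rng.uniform(0, 1, n) > severity   # survival depends only on severity

# Biased assignment: the sicker the patient, the more likely to get B.
biased_b = rng.uniform(0, 1, n) < (0.2 + 0.6 * severity)
# Randomized assignment: a coin flip, independent of severity.
random_b = rng.uniform(0, 1, n) < 0.5

for label, got_b in (("biased", biased_b), ("randomized", random_b)):
    print(f"{label:10s} treatment A: {survives[~got_b].mean():.1%}   "
          f"treatment B: {survives[got_b].mean():.1%}")
# Under biased assignment B appears much worse; under randomization the
# two identical treatments show essentially the same survival rate.
```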


References:

http://seer.cancer.gov/csr/1975_2002/results_merged/topic_survival_by_year_dx.pdf
Data on trends in survival rates for different kinds of cancer. The survival rates change in an irregular, variable way.

http://www.acponline.org/journals/ecp/primers/marapr99.htm
Information on lead time bias and other types of bias in diagnostic tests.

Gil said...

Hmm, those reference links didn't make it.

Here are tinyurl versions:

http://tinyurl.com/o9ytb


http://tinyurl.com/reu7s

David Friedman said...

Gil offers some actual data on the question. Looking at his first link, the pattern seems clear. Survival rates rise pretty steadily over time, with some random noise. For 1-year survival rates the increase is about 0.4%/year, for 5-year about 0.75%.

So someone diagnosed today will be told his survival rate is 64.8%, when extrapolation of the current pattern implies it is about 71.5%. Not an enormous difference, but certainly a significant one.

That's for "all sites--invasive" not for any particular form of cancer.
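For what it is worth, a back-of-the-envelope check of those numbers (my reconstruction; only the 64.8%, the 71.5%, and the 0.75%/year trend come from the comment):

```python
# The gap between the quoted 5-year rate and the extrapolated one,
# divided by the yearly improvement, gives the implied lag between the
# most recent fully observed cohort and a patient diagnosed today.
latest_observed = 64.8        # %, most recent fully observed 5-year rate
extrapolated_today = 71.5     # %, extrapolated rate for a diagnosis today
improvement_per_year = 0.75   # percentage points per year

implied_lag = (extrapolated_today - latest_observed) / improvement_per_year
print(f"Implied lag: about {implied_lag:.0f} years")   # roughly 9 years
```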

Unknown said...

Stephen Jay Gould wrote an excellent essay in his Natural History column, called 'The Median Isn't the Message', which also pointed out that skewness must be given priority over any measure of central tendency in order to get a good idea of survival rates.

Anonymous said...

Hi.

From your physics background you should have proposed something more on the order of a hysteresis curve, with some sort of limiting effectiveness typically less than 100%. Your simple-curve proposal would ultimately predict immortality for just about any medical procedure, based upon the placebo effect alone.

Solveig