Monday, December 21, 2009

More on mammograms at 40

Having recently turned 40, I was advised by my CNM (certified nurse-midwife) that a mammogram was in order, just in time for the national debate over whether and when to have one. For those in the same boat, the op-ed below offers some historical context to help with the decision. I found the author's perspective very helpful in understanding the modern debate. But hey, that's why I love history.

November 20, 2009
Op-Ed Contributor
Addicted to Mammograms
By ROBERT ARONOWITZ
Philadelphia

THE United States Preventive Services Task Force’s recommendation this week that women begin regular breast cancer screening at age 50 rather than 40 is really nothing new. It’s almost identical to the position the group held in the 1990s.

Nor is the controversy that has flared since the announcement something new. It’s the same debate that’s gone on in medicine since 1971, when the very first large-scale, randomized trial of screening mammography found that it saved the lives only of women aged 50 or older. Despite the evidence, doctors continued to screen women in their 40s.

Again in 1977, after an official of the National Cancer Institute voiced concern that women in their 40s were getting too much radiation from unnecessary screening, the National Institutes of Health held a consensus conference on mammography, which concluded that most women should wait until they’re 50 to have regular screenings.

Why do we keep coming around to the same advice — but never comfortably follow it? The answer is far older than mammography itself. It dates to the late 19th century, when society was becoming increasingly disappointed, pessimistic and fearful over the lack of medical progress against cancer. Doctors had come to understand the germ theory of infectious disease and had witnessed the decline of epidemic illnesses like cholera. But their efforts against cancer had gone nowhere.

In the 1870s, a new view of the disease developed. Cancer had been thought of as a constitutional disorder, present throughout the body. But some doctors now posited that it begins as a local growth and remains so for some time before spreading via the blood and lymph systems (what came to be understood as metastasis).

Even though this new consensus was more asserted than definitively proved by experimental evidence or clinical observation, it soon became dogma, and helped change the way doctors treated cancer. Until this time, cancer surgery had been performed only rarely and reluctantly. After all, why remove a tumor, in a painful and dangerous operation, when the entire body is diseased?

The new model gave doctors reason to take advantage of newly developing general anesthesia and antiseptic techniques to do more, and more extensive, cancer surgery. At the turn of the 20th century, William Halsted, a surgeon at Johns Hopkins, promoted a new approach against breast cancer: a technically complicated removal of the affected breast, the lymph nodes in the armpit and the muscles attached to the breast and chest wall.

Doctors widely embraced Halsted’s strategy. But they seem to have paid little attention to his clinical observations, which indicated that while the operation prevented local recurrence of breast tumors, it did not save lives. As Halsted himself became aware, breast cancer patients die of metastatic, not local, disease.

By 1913, the surgeons and gynecologists who started the American Society for the Control of Cancer (later the American Cancer Society) had begun an anti-cancer campaign that, among other things, advised women to see their doctors “without delay” if they had a breast lump. Their message promoted the idea that if cancer was detected early enough, surgery could cure it.

This claim, like the cancer theory it was built on, was based on intuition and wishful thinking and the desire to do something for patients, not on detailed evidence that patients were more likely to survive if their cancer was caught early and cut out. But it did create a culture of fear around breast cancer, and led the public to believe that tumors needed to be discovered at the earliest possible moment.

The “do not delay” campaign reached its heyday in the 1940s, when through lectures, newspaper articles, posters and public health films, doctors exhorted people to survey their bodies for cancer warning signs like breast lumps, irregular bleeding and persistent hoarseness. This campaign generated greater fear, which led to more demand for some means to gain a sense of control over cancer — typically satisfied by more surveillance and treatment.

During the 1930s and ’40s, more and more cancer was being diagnosed. The rising numbers led to even greater pressure to define early stages of cancer and find more cases as early as possible. Meanwhile, the apparent improved cancer survival rates — a result of more people receiving diagnoses, many for cancers that were not lethal — seemed to prove the effectiveness of the “do not delay” campaign, as well as radical cancer surgery.

By the 1950s, some skeptics were pointing out that despite all the apparent progress, mortality rates for breast cancer had hardly budged. And they continued not to budge; from 1950 to 1990, there were about 28 breast cancer deaths per 100,000 people. But calls for earlier diagnosis only increased, especially after screening mammography was introduced in the 1960s.

When the 1971 evidence came along that mammograms were of very limited benefit to women under 50, it ran up against the logic of the early-detection model and the entrenched cycles of fear and control. Detecting cancer in women under 50 should work, according to the model; indeed, younger women are the ones most likely to have the localized cancers that have “not yet” metastasized. And many doctors and women understandably objected, as they do today, to giving up the one means they had to exercise some control over cancer.

Critics of this week’s recommendations have poked holes in the Preventive Services Task Force’s data analysis, have warned against basing present practice guidelines on the older imaging technology used in the studies, and have called for still more studies to be done. They generally sidestep the question of whether the very small numbers of lives potentially saved by screening younger women outweigh the health, psychological and financial costs of overdiagnosis.

You need to screen 1,900 women in their 40s for 10 years in order to prevent one death from breast cancer, and in the process you will have generated more than 1,000 false-positive screens and all the overtreatment they entail. This doesn’t make sense. We could do more research and hold more consensus conferences. I suspect it would confirm the data we already have. But history suggests it would never be enough to convince many people that we are screening too much.

Robert Aronowitz, an internist and a professor of the history and sociology of science at the University of Pennsylvania, is the author of “Unnatural History: Breast Cancer and American Society.”
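For the numerically inclined: here's a quick back-of-the-envelope sketch, in Python, of what the op-ed's screening figures work out to per woman. It uses only the numbers quoted above (1,900 women screened for 10 years, 1 death prevented, more than 1,000 false-positive screens); everything else is just arithmetic, and I'm treating the 1,000 as a floor since the author says "more than."

    # Back-of-the-envelope arithmetic using only the figures quoted in the op-ed.
    women_screened = 1900    # women in their 40s screened for 10 years
    deaths_prevented = 1     # breast cancer deaths prevented in that group
    false_positives = 1000   # a floor; the op-ed says "more than 1,000"

    # Chance that a decade of screening saves any given woman's life
    absolute_benefit = deaths_prevented / women_screened
    print(f"Absolute benefit per woman: {absolute_benefit:.4%}")  # ~0.05%

    # False-positive screens per woman screened (some women get several,
    # so this is screens per woman, not the share of women affected)
    fp_per_woman = false_positives / women_screened
    print(f"False-positive screens per woman: {fp_per_woman:.0%}")  # ~53%

In other words, over that decade there's at least one false alarm for every two women screened, against roughly a 1-in-1,900 chance of a life saved. That lopsided ratio is exactly the trade-off the task force was weighing.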

It's all Greek to me

What is the proper plural form of doula? Doules or doulas? A reader of this story in the Telegraph UK suggests that it should be doules. Regardless, US doules may be shocked to see that UK doules earn a lot more per birth. (OK, the spelling felt awkward.)