Talking Risk: Vioxx, statistics and other complexities
Antidote's posts over the past two weeks about reporting on risk stirred up some great discussion among journalists and scientists about how to best serve readers. Before launching into a new set of statistical concepts, I wanted to pause and share some of the most useful items.
This whole jag about stats started with a comment from Dr. Catherine DeAngelis, the editor of JAMA, that Vioxx should still be on the market.
Cardiobrief's Larry Husten wrote in response to Antidote's post:
DeAngelis of course is right: there are probably a million people who even now would be willing to make the calculated risk and take Vioxx. The problem is that not only were those 1 million people never told about the risk, there were another 19 million people taking the drug as a first line analgesic who were extremely unlikely to derive any unique benefit from the drug and who never should have been started on the drug in the first place. In a carefully selected patient population the risk to benefit equation yields very different results than in an indiscriminate population 20 times the size.
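Husten's arithmetic is worth making concrete. The sketch below uses made-up, purely illustrative numbers (not actual Vioxx trial data) to show his point: a drug that looks like a reasonable bet for a carefully selected million patients can look very different in aggregate once 19 million people who get no unique benefit, but face the same excess risk, are added in.

```python
# Purely illustrative numbers -- NOT actual Vioxx data.
# The point is Husten's: the benefit/risk balance in a selected
# population looks nothing like the balance in an indiscriminate
# population 20 times the size.

def summarize(label, n, benefit_rate, excess_harm_rate):
    """Rough count of people who gain something unique vs. people harmed."""
    benefited = n * benefit_rate
    harmed = n * excess_harm_rate
    print(f"{label}: ~{benefited:,.0f} get a unique benefit, "
          f"~{harmed:,.0f} suffer excess harm")

# Selected population: patients for whom the drug offers something no
# alternative does (hypothetical 20% unique benefit, 0.5% excess harm).
summarize("Carefully selected 1M", 1_000_000, 0.20, 0.005)

# Indiscriminate population: first-line use where cheaper, safer
# analgesics would do -- essentially no unique benefit, same excess harm.
summarize("Indiscriminate 19M", 19_000_000, 0.0, 0.005)
```

With numbers like these, the selected group trades a large benefit for a small risk, while the indiscriminate group absorbs tens of thousands of excess harms for essentially nothing in return.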
Husten's entire post is great, and it prompted this reply from Dr. Harry Greenspun, the chief medical officer for the Dell Perot Systems healthcare group:
Companies always know more about the actual benefit/risk equation of their products than any outside agency... The bottom line is that Merck knew that continuing to sell Vioxx in a responsible manner would not be consistent either with the then undeniable science nor with their own marketing goals. In essence, by 2004 the jig was up.
Brian Reid, who broke the story of Vioxx's approval while at Bloomberg News and is now a director at the healthcare communications agency WCG, pointed out that the chief critic of Vioxx in 2004, Dr. Eric Topol, then at the Cleveland Clinic, has since said something similar to DeAngelis' comment. As quoted by New Yorker writer Michael Specter in his book Denialism, published last October, Topol said:
All Merck had to do was acknowledge the risk, and they fought that to the end. After fifteen months of haggling with the FDA they put a tiny label on the package that you would need a microscope to find. If they had done it properly and prominently, Vioxx would still be on the market. But doctors and patients would know that if they had heart issues they shouldn't take it.
Writer and CUNY professor Steven M. Gorelick highlighted Antidote's post about relative risk on his blog Media, Culture and Health. (Gorelick is probably too humble to toot his own horn, so let me say: anyone stunned or titillated by the recent news that Stephen Ambrose may have faked some of his Eisenhower interviews should read his excellent commentary about Joseph Ellis and the desire to have lived a more noble life.) He commented:
It's hard to think of an area of health media that is reported more poorly and inaccurately than risk. In an era emphasizing "evidence-based" findings, risk is often reported (more often, misreported) in the language of statistics and probability, subjects often poorly understood even by specialists who should know better. The only long-term solution is that more people must become statistically literate and have a basic understanding of statistics, experimental design, and the old, familiar confusion between correlation and causation.
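The confusion Gorelick describes shows up most often in the gap between relative and absolute risk. Here is a minimal sketch, using hypothetical numbers chosen only to make the arithmetic obvious, of how the same finding can be reported two very different ways:

```python
# A classic source of misreporting: relative vs. absolute risk.
# Hypothetical numbers, chosen only to make the arithmetic obvious.

baseline_risk = 0.001      # 1 in 1,000 people have the event without the drug
risk_with_drug = 0.002     # 2 in 1,000 have it with the drug

relative_increase = (risk_with_drug - baseline_risk) / baseline_risk
absolute_increase = risk_with_drug - baseline_risk

print(f"Relative risk increase: {relative_increase:.0%}")    # "risk doubles!"
print(f"Absolute risk increase: {absolute_increase:.1%}")    # 0.1 percentage points
print(f"Number needed to harm:  {1 / absolute_increase:.0f}") # ~1,000 people per extra event
```

"Risk doubles" and "one extra event per 1,000 people" describe the same hypothetical data, which is exactly why readers need both numbers.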
Gorelick also recommended two other sources worth checking out:
The CDC's National Center for Health Statistics
The New York State Department of Health's Basic Statistics
Matthew Herper at Forbes had the pithiest comment. After some back and forth about whether anyone could really know the rate of heart attacks among people taking Vioxx, Herper wrote: "Life's complicated. Good writing is not. Therefore good writing is not life."