Are Medical Conferences Useful? And For Whom?
John Ioannidis, MD, of Stanford, whom the CommonHealth/WBUR bloggers referred to as the "renowned mythbuster of medicine," asks in a JAMA viewpoint piece, "Are Medical Conferences Useful? And for Whom?" (unfortunately, a subscription is required for full-text access).
The CommonHealth blog explains:
After many years of questioning assumptions and seeking harder data on everything from surgery customs to drug studies, Dr. Ioannidis is now taking on a major cultural institution of medicine: The conference. (Some might call it "the boondoggle, junket, fuel-wasting, resume-padding, often-not-peer-reviewed conference.") This latest target is particularly striking given that the Atlantic piece says that "His work has been widely accepted by the medical community; it has been published in the field's top journals, where it is heavily cited; and he is a big draw at conferences."
Excerpts of the Ioannidis JAMA piece:
An estimate of more than 100 000 medical meetings per year may not be unrealistic, when local meetings are also counted. The cumulative cost of these events worldwide is not possible to fathom.
Do medical conferences serve any purpose? In theory, these meetings aim to disseminate and advance research, train, educate, and set evidence-based policy. Although these are worthy goals, there is virtually no evidence supporting the utility of most conferences. Conversely, some accumulating evidence suggests that medical congresses may serve a specific system of questionable values that may be harmful to medicine and health care.
The availability of a plethora of conferences promotes a mode of scientific citizenship in which a bulk production of abstracts, with no or superficial peer review, leads to mediocre curriculum vita building. Even though most research conferences have adopted peer-review processes, the ability to judge an abstract of 150 to 400 words is limited and the process is more of sentimental value.
Moreover, many abstracts reported at the medical meetings are never published as full-text articles even though abstract presentations can nevertheless communicate to wide audiences premature and sometimes inaccurate results. It has long been documented that several findings change when research reports undergo more extensive peer review and are published as completed articles. Late-breaker sessions in particular have become extremely attractive prominent venues within medical conferences because seemingly they represent the most notable latest research news. However, it is unclear why these data cannot be released immediately when they are ready and it is unclear why attending a meeting far from home is necessary to hear them. A virtual online late-breaker portal could be established for the timely dissemination of important findings.
Power and influence appear plentiful in many of these meetings. Not surprisingly, the drug, device, biotechnology, and health care–related industries make full use of such opportunities to engage thousands of practicing physicians. Lush exhibitions and infiltration of the scientific program through satellite meetings or even core sessions are common avenues of engagement. Although many meetings require all speakers to disclose all potential conflicts, the majority of speakers often have numerous conflicts, as is also demonstrated in empirical evaluations of similar groups of experts named on authorship lists of influential professional society guidelines.
Ioannidis doesn't discard the entire notion of conferences. In fact, he projects what "repurposed" conferences might be like:
"Repurposed conferences could be designed to be entirely committed to academic detailing (ed. note: drug company "educational" outreach to physicians). All their exhibitions and satellite symposia would deal with how to prescribe specific interventions appropriately and how to favor interventions that are inexpensive, well tested, and safe. Such repurposed conferences could also focus on how to use fewer tests and fewer interventions or even no tests and no interventions, when they are not clearly needed."
A Google search suggests that no news organization other than the Boston-based blog cited above chose to write about Ioannidis' piece.
Yet, in our HealthNewsReview.org daily reviews of news stories, we see stories every week that are in a rush to publish whatever is presented at such conferences. Examples:
- Breast cancer vaccine research presented at the American Association for Cancer Research annual meeting.
- Mouse research on prostate cancer scans – presented at the same meeting – reported by the same news organization.
- Story based on a presentation from a "Late Breaking Research Session" (one of Ioannidis' other themes in the article above) at the annual meeting of the American Academy of Dermatology. The presentation was one of 15 conducted within a 2 hour time period (an average of 8 minutes per presentation). So how/why was this one selected for news coverage? We are especially bewildered since a poster at the same meeting (Poster 5300) provided information on the use of a competing microwave device for hyperhidrosis. The poster presented 6 month data for 27/31 enrolled subjects. Why then report on a 2 month study in 14 subjects presented presumably in 8 minutes?
- Weak story on a weak, tiny, short-term manufacturer-funded study suggesting weight loss from a supplement containing unroasted coffee bean extract. The primary source material was a 150-word abstract that had not been peer-reviewed and had not yet even been presented at the national meeting of the American Chemical Society.
Each of these was reported in just the past 2 weeks. We see it all the time.
Journalists who cover medical conferences should read and learn from the Ioannidis work.