Of Conflicts and Clinical Trials: Researchers Report New Results
10 September 2013 3:30 pm
CHICAGO, ILLINOIS—This week, the International Congress on Peer Review and Biomedical Publication here drew researchers from around the world to discuss ways to “improve the quality and credibility of scientific peer review and publication.” ScienceInsider attended and covered some of the more intriguing presentations. Today: a look at studies of problems in how researchers report the results of clinical trials and potential conflicts of interest.
Published Trial Results Often Differ From Those Initially Posted
Deborah Zarin, director of the database ClinicalTrials.gov at the National Library of Medicine, likes to say that her website is “a window into the sausage factory”—a view that we usually don’t get of how clinical trials work and how they don’t.
Six years ago, ClinicalTrials.gov was tasked by Congress to embark on a new experiment: In addition to trial registrations, many trial sponsors were required to deposit their results in the public database for anyone to access. At the congress, a group from Yale University School of Medicine explored how well the results posted on ClinicalTrials.gov match up with what’s published. What they found was not particularly encouraging.
Jessica Becker, a medical student, described how she and her Yale colleagues—Harlan Krumholz, Gal Ben-Josef, and Joseph Ross—identified 96 trials published between July 2010 and June 2011, all of them with a ClinicalTrials.gov identification number. They focused on studies that appeared in high-profile journals. Almost three-quarters of the trials analyzed were funded by industry.
All but one trial had at least one discrepancy in how trial details, results, or side effects were reported.
One big question was whether the same primary endpoints and secondary endpoints appeared in both the final publication and the ClinicalTrials.gov results database. A primary endpoint represents the main goal of a study and the question or questions it was designed to answer. Secondary endpoints are often added to squeeze as much information as possible out of what’s collected, but statistically they can be weaker because the trial wasn’t created with them in mind. Primary endpoints in 14 trials appeared only on ClinicalTrials.gov, while primary endpoints from 10 others were only in the publication. The results described were also different in some cases: For 21% of the primary endpoints, what appeared in the journal wasn’t exactly the outcome described on ClinicalTrials.gov, and in 6%, the Yale group suggested that this difference influenced how the results would be interpreted.
For secondary endpoints, the difference was even more dramatic: Of more than 2000 secondary endpoints listed across the trials, just 16% were reported the same way, with the same results, in both the public database and the published article. Results for dozens of secondary endpoints were inconsistent. “Our findings raise concerns about the accuracy of information in both places, leading us to wonder which to believe,” Becker said.
The group hasn’t probed why this is happening: There could be innocent errors on ClinicalTrials.gov or typos in publications. Or authors may promote “more favorable stuff” in what’s printed, she speculated.
“There are many, many microdecisions” that come with writing up a publication, Zarin says. The uncomfortable results presented by Becker are “part of what motivates the desire” for anonymized information on individual patients, Zarin suggests—exposing that might be the only way to reconcile the discrepancies. Zarin also speculates that researchers might add positive secondary endpoints after a study is completed—a big no-no in the trials world—to give it a rosier hue, which would explain why those endpoints don’t appear on ClinicalTrials.gov when the study is first registered. Zarin is conducting her own analysis of the ClinicalTrials.gov results database, which now includes results from almost 10,000 trials. (150,000 trials are registered on the site.) She says she is reaching conclusions similar to the Yale group’s.
One question that the Yale team didn’t explore was whether researchers had entered their results on the site before submitting their paper—something that would allow journal editors or reviewers to play detective and see whether the document they have matches up with what’s in the database.
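The kind of check the Yale group performed—comparing the endpoints a trial lists on ClinicalTrials.gov with those in its publication—amounts to a set comparison. A minimal sketch, assuming hypothetical trial data (the endpoint names and structure below are illustrative, not the group’s actual data or code):

```python
# Illustrative sketch, not the Yale group's actual method or data:
# compare the endpoints a trial posted on ClinicalTrials.gov with
# those reported in the published article.

def compare_endpoints(registry, publication):
    """Return endpoints unique to each source and those shared by both."""
    reg, pub = set(registry), set(publication)
    return {
        "registry_only": sorted(reg - pub),       # posted but never published
        "publication_only": sorted(pub - reg),    # published but never posted
        "in_both": sorted(reg & pub),
    }

# Hypothetical trial: one endpoint dropped from the paper, one added to it.
registry_endpoints = ["overall survival", "progression-free survival"]
published_endpoints = ["overall survival", "quality of life"]

result = compare_endpoints(registry_endpoints, published_endpoints)
print(result["registry_only"])     # ['progression-free survival']
print(result["publication_only"])  # ['quality of life']
```

A real comparison would also have to match the *results* reported for each shared endpoint, not just the endpoint names—which is where the Yale group found the 21% and 6% discrepancies described above.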
Potential Conflicts Still Going Unreported
Clinical trial authors still aren’t reporting their conflicts of interest, despite years of conversations and new policies encouraging them to do so. That’s the bottom line of a study presented here at the International Congress on Peer Review and Biomedical Publication, where Kristine Rasmussen from the Nordic Cochrane Centre in Copenhagen presented a new study tackling this question.
In most countries, it might be tough to determine whether authors who don’t disclose conflicts actually have them. But Denmark is unusual, because all Danish physicians are required by law to fill out forms if they collaborate with industry, and those forms are publicly available. (The United States is beginning to implement a similar rule as part of the Affordable Care Act.) The Danish system made it straightforward for Rasmussen and her Cochrane colleagues—Jeppe Schroll, Peter Gøtzsche, and Andreas Lundh—to compare disclosures of industry associations in published papers with the forms filed by doctors.
They looked at journals that follow recommendations from the International Committee of Medical Journal Editors (ICMJE) and searched for trials that had at least one Danish physician author who did not work at a company. They selected 100 recent studies. About half the doctors had some financial conflict of interest with a drug company, though not necessarily the company sponsoring the published research, they found.
Although most of the doctors disclosed relationships they had with the firm funding the published research, fewer than half shared relationships they had with industry competitors. And despite all the talk in recent years about conflicts, 16% who had a financial tie to a sponsor or drug manufacturer leading the study didn’t report it. One example cited by Rasmussen: a physician who served as an advisory board member and speaker for AstraZeneca, maker of the drug covered by the paper, yet declared he or she had no conflicts.
“I was actually very disappointed” by this, says Vivienne Bachelet, editor-in-chief of the journal Medwave in Santiago, who was not involved in the study. In her country, she says, the “level of awareness is just nil” about conflicts of interest. Medical societies in particular get substantial funding from drug companies but almost no one—the societies themselves, drug regulators, or the individual doctors—see this as something that should be disclosed, Bachelet says. “If they’re not disclosing over there,” in Denmark, “what’s to be expected in Chile?”
Rasmussen noted that one issue might be vagueness in the ICMJE conflict of interest form: While it’s specific about the expansive nature of conflicts that might arise, such as travel paid by a company or fees for expert testimony, it suggests that authors disclose only those that are “relevant.” Says Rasmussen, “authors are left to decide” what falls into that category.
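The Cochrane group’s comparison boils down to a set difference: ties on file in the public registry minus ties declared in the paper. A minimal sketch, assuming hypothetical author records (all names and tie descriptions below are illustrative, not the study’s actual data or code):

```python
# Illustrative sketch, not the Cochrane group's actual method or data:
# find industry ties that a physician's public registry forms record
# but the paper's disclosure statement omits.

def undisclosed_ties(registry_ties, disclosed_ties):
    """Ties on file in the public registry that the paper fails to declare."""
    return sorted(set(registry_ties) - set(disclosed_ties))

# Hypothetical author: an advisory-board role is on file with the registry,
# but the paper declares only the speaker fees.
on_file = ["AstraZeneca: advisory board", "AstraZeneca: speaker fees"]
declared = ["AstraZeneca: speaker fees"]

print(undisclosed_ties(on_file, declared))  # ['AstraZeneca: advisory board']
```

The Danish registry makes this comparison mechanical; the hard part, as Rasmussen noted, is that the ICMJE form leaves it to authors to decide which ties are “relevant” enough to declare in the first place.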