Our coverage of the retraction by the Journal of the American Heart Association of a study that claimed to link vaping and heart attacks caused a number of our blog readers to reach out to us to comment on the shocking lack of scientific credibility detailed in that story.
As we reported, the study was based on a mere 38 subjects and, worse, "the vast majority of vapers who reported having had a heart attack developed it on average 10 years BEFORE e-cigarette use initiation." It doesn't take a Ph.D. to understand that a heart attack you had before you started vaping wasn't caused by vaping!
The author of that particular study has a fairly sordid history of being accused of manipulating science in what amounts to an anti-vaping crusade. Was this just an isolated example of a flamboyant and overly zealous individual or is there a larger pattern at work? Were our readers correct in suggesting that scientific studies have become so corrupted by undue influence that they can no longer be trusted?
With our curiosity piqued, we decided to dig a little further and we quickly came across some shocking support for this theory.
The editors of two of the largest, most highly respected medical journals in the world have openly and publicly said that science can no longer be trusted.
“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as editor of The New England Journal of Medicine," wrote Marcia Angell.
Angell would know. She spent more than two decades on the editorial staff of The New England Journal of Medicine, ultimately serving as its editor-in-chief. The New England Journal of Medicine is one of the most prestigious peer-reviewed medical journals in the world and has been published continuously since 1812. Yet its longtime editor has stated in black and white that it is no longer possible to believe much of the clinical research that is published!
Just as shocking were the words of Richard Horton, editor of the equally prestigious journal The Lancet, founded in 1823.
"The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness," said Horton.
Horton's laundry list of scientific afflictions stopped short of the most serious charge that could be leveled against the authors of unethical studies: outright "scientific fraud."
However, one study attempted to quantify just how many researchers might be cooking the scientific books. The study performed a meta-analysis of dozens of surveys conducted over the years that specifically asked researchers about unethical practices. The results were quite shocking.
According to the answers supplied by the researchers themselves, 2% of them admitted that they had "fabricated, falsified or modified data or results at least once." In addition, 33.7% of them admitted to other questionable practices such as "plagiarism, duplicate publication, undisclosed changes in pre-research protocols or dubious ethical behavior."
When the same questions were asked about colleagues (that is, other researchers!), the numbers shot up. Some 14.12% of researchers claimed to know of a colleague who had "fabricated, falsified or modified data or results at least once," and 72% had observed other questionable practices by fellow researchers.
You can read a letter to the Editor of the American Journal of Physiology, Endocrinology and Metabolism written by one researcher who publicly admitted to falsifying data. It's a shocking thing to read in black and white, a scientist directly admitting that the data included in a published study was fraudulently altered.
"I now wish to report that the data (reported in) that paper (was) falsified," wrote the researcher. "I take sole responsibility for the falsification, and I now publicly exonerate my coauthors."
Unfortunately, few researchers come clean so transparently.
"There can be no doubt that discovered cases of research and publication misconduct represent a tip of an iceberg and many cases go unreported," noted another scientific paper on the subject.
In addition, the "duplicate publication" mentioned above covers a cute little practice called "salami publishing," a fraudulent manipulation whereby a researcher publishes "many papers, with minor differences, drawn from the same study," passing one study off as many. This taints the research pool and makes the study's conclusions seem more established than they actually are.
"People who have a financial stake in research outcomes should not publish in scientific journals without full and clear disclosure of conflicts of interest—especially when the results involve the safety or effectiveness of a company’s products," wrote the Union of Concerned Scientists (UCS), a 50-year-old non-profit organization working for "rigorous, independent science to solve our planet's most pressing problems."
In fact, researchers and the companies backing them are supposed to declare those conflicts of interest, but that often doesn't happen. The New England Journal of Medicine was forced to publicly apologize after discovering that nearly half the articles it had published on drug therapy over a three-year period were written by researchers who failed to disclose a financial link to the product or manufacturer. Oops.
Further muddying the waters is disagreement over how to define a "conflict of interest." Is it a conflict if the researcher used to work for the company but doesn't anymore? What about when the owner of the company whose products are being studied is a close friend or golf buddy of the researcher? Or when the researcher's wife is the one who owns the stock in the company?
These examples highlight the most obvious corrupting factor in scientific studies: money. The flow of money has far-reaching effects that can skew results in ways so powerful that it is difficult to fully understand the scope of its influence. The influence of outright researcher fraud is likely very small by comparison.
As the saying goes, money talks and money thrown at lobbyists, doctors and scientific foundations has a loud voice. But there are also myriad ways to skew research, bury undesirable outcomes or gerrymander the results of studies, rearranging them to reach a desired result. Large companies with millions or even billions of dollars at stake use every trick in the playbook.
According to UCS, when hiding conflicts of interest doesn't work, companies will resort to "planting ghostwritten articles in legitimate scientific journals," making them appear to have been written by a legitimate scientific researcher when in fact they were crafted by corporate shills.
There's also no law that says a company has to publish the results of every study they do. This enables companies to simply bury studies that return unfavorable results while only publishing the ones that succeed in painting their products in a positive light.
This is known as "the file drawer problem," or "file drawer effect." In essence, there can be mountains of research showing a particular drug is dangerous or ineffective, but no one will ever know about it, because it has been deep-sixed in the company's vault. Only the "good" studies will see the light of day.
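The distorting power of the file drawer is easy to demonstrate. Here is a toy Python simulation (all names and numbers are hypothetical, for illustration only): we run 1,000 trials of a treatment that truly does nothing, then "publish" only the ones that happen to cross a conventional significance threshold by chance. The published literature ends up unanimously reporting a benefit that does not exist.

```python
import random
import statistics

def run_study(n=20, true_effect=0.0):
    """Simulate one two-arm trial of a treatment with a given true effect."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Rough standard error of the difference in means
    se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
    return diff, diff / se  # effect estimate and a crude t-statistic

random.seed(42)
all_studies = [run_study() for _ in range(1000)]

# The "file drawer": only studies crossing t > 2 (roughly p < 0.05,
# one-sided) make it out of the vault and into a journal.
published = [d for d, t in all_studies if t > 2.0]

print(f"Studies run:       {len(all_studies)}")
print(f"Studies published: {len(published)}")
print(f"Mean 'effect' in published studies: {statistics.mean(published):.2f}")
```

Every published study reports a positive effect, even though the true effect is exactly zero; the hundreds of null results never leave the drawer.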
It's also fairly easy to set up studies "with flawed methodologies biased toward predetermined results," notes UCS. With some cunning design, a company can ensure a study will produce the result it is looking for. Careful selection of methodology or the use of an inadequately small sample size can easily rig studies to produce results that have no bearing on the truth or on the wider population.
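To see why tiny samples are such a convenient rigging tool, consider this hypothetical sketch: with only five subjects per arm, random noise alone regularly produces a large apparent "benefit," so a motivated sponsor just repeats the tiny trial until one run looks good and stops there.

```python
import random
import statistics

random.seed(7)

def tiny_study(n=5):
    """A deliberately underpowered trial of a treatment with ZERO true effect."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Run tiny studies until one happens to show a large "benefit", then stop.
attempts = 0
while True:
    attempts += 1
    effect = tiny_study()
    if effect > 0.5:  # a large apparent effect, produced purely by noise
        break

print(f"Declared success after {attempts} attempts, 'effect' = {effect:.2f}")
```

With so few subjects, an apparent effect this large turns up roughly every few attempts, so the "successful" trial is usually found quickly; a properly sized study would almost never produce it by chance.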
When all else fails and genuine research finally catches up to a corporation, showing that its products are dangerous, poisonous or even just useless, companies reach for the final ace up their sleeve: doubt.
"...Rather than accepting the process of scientific discovery, business interests press to have every tiny bit of uncertainty explored before any policy decision can be made, demanding proof rather than precaution—in fact, they even manufacture uncertainty," wrote one commentator.
This tactic is directly taken from the playbook of Big Tobacco. In the immortal words of a tobacco executive, "Doubt is our product since it is the best means of competing with the 'body of fact' that exists in the minds of the general public. It is also the means of establishing a controversy."
Doubt is the brake pedal that slows the inevitable shutdown of dangerous industry practices, allowing thousands, even millions, to be harmed in the meantime. This same tactic has been used to delay the removal of lead paint, BPA, Monsanto's Roundup and numerous other dangerous chemicals and drugs from use.
In light of these facts, it is easy to see how the research of Stanton Glantz has managed to drastically skew the debate on vaping, planting seeds of doubt and fear in the minds of the public and whipping up an anti-vaping frenzy on Capitol Hill.
Before the Journal of the American Heart Association retracted his study linking heart attacks to vaping, it was cited dozens of times in mainstream media publications. Glantz's research was even praised on the floor of Congress during the debate over HR 2339, after the retraction had already been issued.
As USA Today noted, "The competition to prove or disprove the best way to quit smoking is also big business. It includes the fight for the $40 million in federal grants Glantz and UCSF were awarded."
The sad truth is that money, influence, the need for power and prestige, along with the overall rot of corruption so prevalent in this day and age have eaten through the last bastion of dispassionate authority: the scientific method. Even science, it seems, really can no longer be trusted.
For a deeper dive on this topic, we highly recommend The Disinformation Playbook authored by the Union of Concerned Scientists as well as Marcia Angell's book, Drug Companies & Doctors: A Story of Corruption, and The Triumph of Doubt: Dark Money and the Science of Deception by David Michaels.