Over the years I have received many emails about scientific research and health care. Three types of these emails interest me in particular.
One type I’ll call the “evil scientific conspiracy” group. For example, the email might be angry that I have listed the site Quackwatch.com as part of my list of links. The writer might feel that this demonstrates my closed-mindedness about alternative medicine. What such a link in fact demonstrates is my commitment to evidence-based health practice. Sure, we don’t know everything yet. But we can often make pretty good guesses about what works and what doesn’t. And there are lots of people out there who want your money, so it’s important to be careful. A characteristic statement from someone in this group might be, “Well, you can use statistics to prove anything.” More accurately, you can use statistics to attempt to prove anything, but whether it actually works on an informed reader is another story. There is no scientific conspiracy any more than there is a feminist conspiracy. If there really were a feminist conspiracy I’d be driving a much nicer car.
The second type of email takes the opposite approach. These folks feel that I am not sufficiently rigorous in my scientific approach and chastise me for my woo-woo health options that include chiropractic care. And yet, although I am often willing to try new things, I always do my homework first. Before I walked into a chiro clinic, I made sure that there was clinical evidence that it might be worthwhile. That being said, if you suffer from chronic pain, once you run out of standard medical options you will definitely begin considering crystals and incantations. Trust me.
The third type I’ll call the pseudoscience whackjobs. These are rare yet amusing and generally concern the enthusiastic endorsement of some type of bizarre physiological fallacy. Also they have a tortilla chip that kind of looks like a religious icon if you squint a bit and hold it sideways. They say things like, “People laughed at Einstein!” Yes, well, people also laughed at Tom Green and the only discovery he made was that high school age males will be amused if you spray milk through your nose. I used to rebut such things with a gentle reminder of basic anatomical facts and the laws of physics, but these days I usually just hit “delete”.
How do we know what the “truth” is? When it comes to health and fitness, people are often confused by what seems like conflicting research reported in the media. It seems like everything conflicts. Eat this. Don’t eat this. This will give you cancer. This will cure cancer. And so forth.
Academic pressures
And we can’t even count on the honesty of scientific research. Researchers are under immense pressure to publish quickly and extensively, and their funders, who often include corporations, are looking for results. In a survey published in the journal Contemporary Clinical Trials (26 no 2 [April 2005]: 244-251), 17% of authors reported personally knowing of a case of fabrication or misrepresentation within the previous 10 years, from a source other than published accounts of research misconduct.
A university legend from an institution that shall remain nameless concerns a certain physics professor who was known for his prodigious research output. Nobody could figure out how this guy was able to put out so many articles. One day his “secret” was discovered. The professor would hang out in the department mailroom. Every time he saw a big envelope being placed in the outgoing mail, he’d steal it. Most of the time it was a manuscript on its way to a journal. He’d change the name on the manuscript and send it to an obscure no-name journal for publication. Eventually one of the authors, wondering why he or she had never heard back from the journal, spotted the article published elsewhere under the professor’s name, and the scam was over. And you thought that academia was boring. Now you find out it’s a den of intrigue and espionage! I hear that some of them are even having sex occasionally. Tsk.
But even among honest researchers, experimentation often yields no “results” in the form of breakthroughs or eureka moments. Quite often the experiment shows no effect or no difference. That is itself an important finding; it’s just not as sexy. Compare “Men are from Mars, women are from Venus” to “men are from Earth, women are from Earth, so get over it”. The second one is what studies most often find, but the first one is much catchier, isn’t it? And it’s the catchy stuff that grabs the headlines. The mainstream science press does not want careful conclusions or cautious advice. They do not want to hear that there is nothing to see here, please move along. They want the secret, man! Will this pill give me total buffitude? Enquiring minds want to know!
‘Lives at risk’ from research fraud
BBC News UK, Thursday, June 4, 1998
Doctors have warned that medical researchers who fake evidence are risking lives.
A major new report has concluded that fraud and fabrication is widespread throughout medical research. The practice has potentially devastating implications because doctors base treatment on published research.

The Committee on Publication Ethics (COPE) was set up last year following mounting concern among editors of scientific publications that research studies contained faked results. It is thought that increased pressure to achieve results to obtain scarce funding resources has pushed many scientists into acting dishonestly.

The COPE report cites 25 cases of scientific fraud. In one case a scientist who claimed to have transplanted black skin onto a white mouse had in fact simply coloured the mouse with a felt-tip pen.
A British Medical Association spokesman said that members of the committee had been approached by a “phenomenal” number of people revealing cases of fraud and misconduct, and that the problem was far more widespread than was first thought. “There is no doubt that there is phenomenal evidence to show that there is a lot of this occurring,” the spokesman said.
The committee has recommended that a national regulatory body be set up to monitor standards in scientific research. “The trouble is that there are a huge amount of people involved in research who are not doctors or medical people, and who are not registered with any professional body,” the spokesman said. “Therefore there is no recourse.”
So how do you, an average schmoe, figure out who to believe? Well, a scientific background is helpful, but if you don’t have one, you use your common sense. Here are some helpful tips about how to read and interpret scientific research.
Let’s say, perhaps, you are interested in a particular supplement – let’s call it Sloth-B-Gon. Perhaps you heard about Sloth-B-Gon from someone in the gym, or you see an ad for it, and you want to know more about it.
Rule 1. DO YOUR HOMEWORK.
If you found an unmarked pill in the parking lot, would you pick it up and eat it? Unless you are a recipient of a Darwin Award, or under three years old, probably not. So why would you take a supplement with unknown ingredients and effects?
Rule 2. COMPANIES LIE.
Supplement and fitness companies are under no obligation to demonstrate that any of their claims are true. They can lie like rugs. Also there is no Tooth Fairy and he’s never going to leave his wife for you.
Rule 3. ALWAYS GO BACK TO THE ORIGINAL SOURCE.
You know how when you talk to your family about An Issue and Sister A says one thing and Sister B says Sister A is a liar and then Mom says they are both full of it? Possibly this is just my family. In any case, the truth is usually in between somewhere and nobody’s interpretation can be fully trusted. If possible, go and look at the study that’s cited as evidence. You may find it’s been misquoted or too broadly interpreted. You may even find that it doesn’t exist.
When I was a child, my father sat me down and gave me some fatherly guidance about the world. Being a university professor, his advice was, “Always cite the primary research.” Thus far, dad’s sage words have never steered me wrong. But how do you go about even understanding the egghead mumbo jumbo?
Reading tips
Who is the author and where was the study done?
Look at where the study was performed. Was it for a company? Was it at a university? Go and look up the research on the university’s website. Again, in extreme cases, you may discover that the person doesn’t even exist.
Where was the study published?
Is it a scholarly source? Scholarly sources are peer reviewed, meaning that other academic and clinical researchers in the field review the study critically before it makes it to print. Peer-reviewed sources are usually journals, not magazines, newspapers, or websites. Journals have titles like Journal of Strength and Conditioning Research, not TightBodz Quarterly. Look up the journal. If you work at a university you can often get full-text versions of the original article, but even if not, searching on PubMed or Google Scholar will yield the abstract.
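If you’d rather script the lookup than click around, PubMed has a free public search API (the NCBI E-utilities). Here’s a minimal sketch in Python; it assumes the requests library is installed, and the search term is just an example, since Sloth-B-Gon is fictional.

```python
import requests  # assumed installed: pip install requests

# NCBI E-utilities: the free public API behind PubMed search.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def pubmed_abstracts(term: str, max_results: int = 3) -> str:
    """Search PubMed for `term`; return the top abstracts as plain text."""
    ids = requests.get(ESEARCH, params={
        "db": "pubmed", "term": term,
        "retmax": max_results, "retmode": "json",
    }).json()["esearchresult"]["idlist"]
    if not ids:
        return "No results found."
    return requests.get(EFETCH, params={
        "db": "pubmed", "id": ",".join(ids),
        "rettype": "abstract", "retmode": "text",
    }).text

# Sloth-B-Gon won't turn up anything, so try a real topic:
print(pubmed_abstracts("creatine supplementation strength"))
```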
Who was studied?
Animal studies can suggest things about humans, but aren’t always directly applicable. People aren’t mice, or fruit flies, or fish, or even our closest relatives, primates. In an extreme case in March 2006, six human subjects of a drug trial in the UK ended up in intensive care with massive organ failure. The drug had been tested on animals and researchers were shocked – previously, everything had gone swimmingly and there were no indications that humans would experience such an effect. But in practice, they did.
Even if the subjects are humans, not all humans are the same. For example, untrained subjects show markedly different results from a training program than experienced athletes do. For the purposes of testing an exercise protocol, they might as well be different species. Women may differ from men; older subjects may differ from younger ones; groups may differ by ethnicity. Many studies are done on college-age students because that’s who’s close at hand for university-based researchers.
What was the study looking for?
How was the question asked? Imagine if you wanted to study the similarities between apples and oranges. Well, they’re both round, they’re both considered fruit, they both contain vitamin C. What if you wanted to study the differences? One is citrus; they tend to grow in different climates, and you eat one’s peel but not the other’s. The answer to “compare apples and oranges” is different, depending on what you’re looking for.
What was the sample size?
Is the study just generalizing from individual anecdotes? If I said to you, “I’ve owned three cats and all of them were black, so therefore the only colour for cats must be black,” you’d think that was dumb, both because three cats aren’t all cats, and because my experience isn’t everyone’s experience. As stats profs are fond of saying, the plural of anecdote isn’t data. If some guy in the gym swears that XYZ worked BIGTIME!!!! for him, that’s not the same as a controlled scientific study involving 1000 people.
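If you want to see how fast small samples mislead, here’s a quick simulation sketch in Python. The 25% black-cat rate is invented purely for illustration.

```python
import random

random.seed(42)  # reproducible illustration

TRUE_BLACK_RATE = 0.25   # invented: pretend 25% of all cats are black
TRIALS = 100_000

def all_black(n: int) -> bool:
    """Draw n random cats; True if every single one happens to be black."""
    return all(random.random() < TRUE_BLACK_RATE for _ in range(n))

for n in (3, 30, 300):
    hits = sum(all_black(n) for _ in range(TRIALS))
    print(f"sample of {n} cats: all-black {hits / TRIALS:.3%} of the time")

# With only 3 cats, about 1.6% of samples come up all black, so plenty
# of people can honestly swear every cat they've owned was black.
# With 300 cats, it essentially never happens.
```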
What does the evidence say?
Is there a strong case? Something might be statistically significant, which means it’s enough of a blip that it probably isn’t just chance, but statistically significant isn’t the same as meaningful. If I said to you, “Taking this drug makes me 3% less likely to want to kill you with an axe,” you’d still probably want a restraining order.
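Here’s a hedged sketch of that “significant but tiny” problem, using an ordinary two-sample t-test (Python with NumPy and SciPy; the axe-murderousness scale and all numbers are invented):

```python
import numpy as np
from scipy import stats  # assumed available

rng = np.random.default_rng(0)

# Invented scale: "axe-murderousness" scores, mean 100, SD 15.
# The drug's true effect is a measly 3-point (3%) reduction.
n = 20_000
placebo = rng.normal(100, 15, n)
drug = rng.normal(97, 15, n)

t_stat, p_value = stats.ttest_ind(placebo, drug)
print(f"p-value: {p_value:.1e}")   # vanishingly small: "significant!"
print(f"actual difference: {placebo.mean() - drug.mean():.1f} points")

# The tiny p-value only says the gap is unlikely to be a chance blip.
# Whether a 3% drop in axe-murderousness matters is a separate,
# non-statistical question -- keep the restraining order.
```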
How was the study done?
Was there a control or comparison? Speaking of controls, that’s another important thing that sets good studies apart from sucky studies. Let’s say you are testing a supplement on a group of people. You could just give them the supplement, send them off to the gym, and see what happens. Six weeks later, they’re all stronger. How do you know it was the supplement and not just the effects of six weeks of training? Answer: you don’t.
For one thing, the placebo effect is really strong. People come up with all kinds of wacky reactions to fake pills and imaginary stimuli. When I was 12, I went away to sleepover camp. One night, we thought it would be cool if we rolled up pine needles in toilet paper and smoked it. We called it “bumwadda”. We all swore we were getting totally high. Well, maybe we were – who knows. But the point is, we all wanted to see what it was like to be high and we were darn well going to convince ourselves that we were.
Second, you need to be sure that the thing you’re studying is what’s responsible. If people are getting stronger after that six weeks of training, you need another group doing the same training without the supplement, so that you can compare. The gold standard of research is the double-blind study: the subjects don’t know whether they’re getting the real thing, and neither do the researchers, so nobody’s expectations can bias the results. Nobody knows for sure which is which until the results are tallied.
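To put the last few paragraphs together, here’s a toy simulation (Python, made-up numbers) of the no-control trap: the supplement does nothing, everyone improves from training anyway, and without a comparison group the pill looks brilliant.

```python
import random

random.seed(1)

# Toy model in which the supplement does NOTHING: every subject
# gains 5-15% strength from six weeks of training, pill or no pill.
def training_gain() -> float:
    return random.uniform(5.0, 15.0)

supplement_group = [training_gain() for _ in range(25)]
control_group = [training_gain() for _ in range(25)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Supplement group: +{mean(supplement_group):.1f}% stronger")
print(f"Control group:    +{mean(control_group):.1f}% stronger")

# Test the supplement group alone and its ~10% gain looks like proof
# the pill works. Put the control group next to it and you can see
# the training explains everything.
```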
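And blinding itself is mostly bookkeeping. Here’s a minimal sketch of a blinded allocation, assuming some third party holds the treatment key until all the data are in:

```python
import random

random.seed(2025)

subjects = [f"subject-{i:02d}" for i in range(1, 21)]

# Randomly assign half to supplement, half to placebo.
treatments = ["supplement"] * 10 + ["placebo"] * 10
random.shuffle(treatments)
key = dict(zip(subjects, treatments))          # sealed until the end

# Everyone running the trial sees only opaque bottle codes.
codes = random.sample(range(1000, 10000), len(subjects))
labels = {s: f"bottle-{c}" for s, c in zip(subjects, codes)}

print(labels["subject-01"])   # all anyone sees during the trial
# key["subject-01"] is opened only after every result is recorded,
# so neither the subjects' nor the researchers' hopes can nudge
# the measurements.
```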
What conclusions do the authors make and are these logical and appropriate?
One of the worst offenses in bad research, or in bad use of research, is inappropriately generalizing study results or inventing explanations for them. For example, when a study gets published about fruit fly behaviour, the reporter may speculate about how this applies to humans. The worst example I ever saw was a study about aggressive behaviour in wasps; the newspaper headline for the story read, “Are Men Born to Hockey Fight?” (This may be a Canadian thing. For all I know, Canadians are born to hockey fight. Ya gotta get that sweater up over their head first so you can nail ’em with the kidney shots.) Another way to screw this up is speculating on reasons for things that aren’t supported by evidence, e.g. that men are more likely to hog the TV remote because Man The Great Prehistoric Cave Hunter liked to hog rocks with dots on them. But basically, ask yourself: does the evidence support the conclusions? Do the conclusions raise more questions?
What is pseudoscience and how can I recognize it?
From The Onion: Revolutionary New Insoles Combine Five Forms Of Pseudoscience
Here’s a little piece that appeared on the Supertraining email list. I thought it was appropriate here.
The Seven Warning Signs of Bogus Science
By Robert L. Park
The National Aeronautics and Space Administration is investing close to a million dollars in an obscure Russian scientist’s antigravity machine, although it has failed every test and would violate the most fundamental laws of nature. The Patent and Trademark Office recently issued Patent 6,362,718 for a physically impossible motionless electromagnetic generator, which is supposed to snatch free energy from a vacuum. And major power companies have sunk tens of millions of dollars into a scheme to produce energy by putting hydrogen atoms into a state below their ground state, a feat equivalent to mounting an expedition to explore the region south of the South Pole.
There is, alas, no scientific claim so preposterous that a scientist cannot be found to vouch for it. And many such claims end up in a court of law after they have cost some gullible person or corporation a lot of money. How are juries to evaluate them?
Before 1993, court cases that hinged on the validity of scientific claims were usually decided simply by which expert witness the jury found more credible. Expert testimony often consisted of tortured theoretical speculation with little or no supporting evidence. Jurors were bamboozled by technical gibberish they could not hope to follow, delivered by experts whose credentials they could not evaluate.
In 1993, however, with the Supreme Court’s landmark decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., the situation began to change. The case involved Bendectin, the only morning-sickness medication ever approved by the Food and Drug Administration. It had been used by millions of women, and more than 30 published studies had found no evidence that it caused birth defects. Yet eight so-called experts were willing to testify, in exchange for a fee from the Daubert family, that Bendectin might indeed cause birth defects.
In ruling that such testimony was not credible because of lack of supporting evidence, the court instructed federal judges to serve as “gatekeepers,” screening juries from testimony based on scientific nonsense. Recognizing that judges are not scientists, the court invited judges to experiment with ways to fulfill their gatekeeper responsibility.
Justice Stephen G. Breyer encouraged trial judges to appoint independent experts to help them. He noted that courts can turn to scientific organizations, like the National Academy of Sciences and the American Association for the Advancement of Science, to identify neutral experts who could preview questionable scientific testimony and advise a judge on whether a jury should be exposed to it. Judges are still concerned about meeting their responsibilities under the Daubert decision, and a group of them asked me how to recognize questionable scientific claims. What are the warning signs?
I have identified seven indicators that a scientific claim lies well outside the bounds of rational scientific discourse. Of course, they are only warning signs — even a claim with several of the signs could be legitimate.
1. The discoverer pitches the claim directly to the media.
The integrity of science rests on the willingness of scientists to expose new ideas and findings to the scrutiny of other scientists. Thus, scientists expect their colleagues to reveal new findings to them initially. An attempt to bypass peer review by taking a new result directly to the media, and thence to the public, suggests that the work is unlikely to stand up to close examination by other scientists.
One notorious example is the claim made in 1989 by two chemists from the University of Utah, B. Stanley Pons and Martin Fleischmann, that they had discovered cold fusion — a way to produce nuclear fusion without expensive equipment. Scientists did not learn of the claim until they read reports of a news conference. Moreover, the announcement dealt largely with the economic potential of the discovery and was devoid of the sort of details that might have enabled other scientists to judge the strength of the claim or to repeat the experiment. (Ian Wilmut’s announcement that he had successfully cloned a sheep was just as public as Pons and Fleischmann’s claim, but in the case of cloning, abundant scientific details allowed scientists to judge the work’s validity.)
Some scientific claims avoid even the scrutiny of reporters by appearing in paid commercial advertisements. A health-food company marketed a dietary supplement called Vitamin O in full-page newspaper ads. Vitamin O turned out to be ordinary saltwater.
2. The discoverer says that a powerful establishment is trying to suppress his or her work.
The idea is that the establishment will presumably stop at nothing to suppress discoveries that might shift the balance of wealth and power in society. Often, the discoverer describes mainstream science as part of a larger conspiracy that includes industry and government. Claims that the oil companies are frustrating the invention of an automobile that runs on water, for instance, are a sure sign that the idea of such a car is baloney. In the case of cold fusion, Pons and Fleischmann blamed their cold reception on physicists who were protecting their own research in hot fusion.
3. The scientific effect involved is always at the very limit of detection.
Alas, there is never a clear photograph of a flying saucer, or the Loch Ness monster. All scientific measurements must contend with some level of background noise or statistical fluctuation. But if the signal-to-noise ratio cannot be improved, even in principle, the effect is probably not real and the work is not science.
Thousands of published papers in parapsychology, for example, claim to report verified instances of telepathy, psychokinesis, or precognition. But those effects show up only in tortured analyses of statistics. The researchers can find no way to boost the signal, which suggests that it isn’t really there.
4. Evidence for a discovery is anecdotal.
If modern science has learned anything in the past century, it is to distrust anecdotal evidence. Because anecdotes have a very strong emotional impact, they serve to keep superstitious beliefs alive in an age of science. The most important discovery of modern medicine is not vaccines or antibiotics, it is the randomized double-blind test, by means of which we know what works and what doesn’t. Contrary to the saying, “data” is not the plural of “anecdote.”
5. The discoverer says a belief is credible because it has endured for centuries.
There is a persistent myth that hundreds or even thousands of years ago, long before anyone knew that blood circulates throughout the body, or that germs cause disease, our ancestors possessed miraculous remedies that modern science cannot understand. Much of what is termed “alternative medicine” is part of that myth. Ancient folk wisdom, rediscovered or repackaged, is unlikely to match the output of modern scientific laboratories.
6. The discoverer has worked in isolation.
The image of a lone genius who struggles in secrecy in an attic laboratory and ends up making a revolutionary breakthrough is a staple of Hollywood’s science-fiction films, but it is hard to find examples in real life. Scientific breakthroughs nowadays are almost always syntheses of the work of many scientists.
7. The discoverer must propose new laws of nature to explain an observation.
A new law of nature, invoked to explain some extraordinary result, must not conflict with what is already known. If we must change existing laws of nature or propose new laws to account for an observation, it is almost certainly wrong.
I began this list of warning signs to help federal judges detect scientific nonsense. But as I finished the list, I realized that in our increasingly technological society, spotting voodoo science is a skill that every citizen should develop.
Robert L. Park is a professor of physics at the University of Maryland at College Park and the director of public information for the American Physical Society. He is the author of Voodoo Science: The Road From Foolishness to Fraud (Oxford University Press, 2002).