In Brief
Just because a study is peer reviewed doesn't mean it's right. Explore below some of the issues that come up when reading through scientific work.

Have you ever tried to interpret new research to figure out what the study means in the grand scheme of things? Well, maybe you’re smart and didn’t make any mistakes – but more likely, you’re like most humans and accidentally made one of these 10 mistakes.

Image: correlation vs causation – a reminder that correlation does not equal causation.

1. Wait! That’s Just One Study!

You wouldn’t judge all books just based on War and Peace or The Martian. And so, neither should you judge any topic based on just one study.

If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.

The well-worn and thoroughly discredited case of the measles, mumps, and rubella (MMR) vaccine causing autism serves as a great example of both of these.

People who blindly accepted Andrew Wakefield’s (now retracted) study – when all the other evidence was to the contrary – fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.

2. Significant Doesn’t Mean Important

Some effects might well be statistically significant, but so tiny as to be useless in practice.

You know what they say about statistics? Flickr/Frits Ahlefeldt-Laurvig, CC BY-ND

Associations (like correlations) are especially prone to this, particularly when studies have huge numbers of participants. Basically, with enough participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.

One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.

The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1%. At this effect size – and considering the possible costs and risks associated with taking aspirin – it is dubious whether it is worth taking at all.
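The significance/effect-size distinction above is easy to demonstrate numerically. The sketch below uses a standard two-proportion z-test with hypothetical counts of roughly the magnitude the text describes (two groups of about 11,000, with heart attacks in each) – these are illustrative numbers, not the actual study's data:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided p from normal tail
    return p1 - p2, p_value

# Hypothetical counts: ~11,000 people per arm, heart attacks in each group.
diff, p = two_proportion_ztest(100, 11_000, 190, 11_000)
print(f"absolute risk difference: {diff:.4%}")
print(f"p-value: {p:.2e}")
```

Running this gives a p-value far below 0.00001, yet an absolute risk difference of under one percentage point – "highly significant" and tiny at the same time.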

3. Effect Size Doesn’t Mean Useful

We might have a treatment that lowers our risk of a condition by 50%. But, if the risk of having that condition was already fairly low (say a lifetime risk of 0.002%), then reducing that might be a little pointless, especially if the treatment has side effects.

We can flip this around and use what is called Number Needed to Treat (NNT).

In normal conditions, if two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.
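The NNT arithmetic above can be written out directly, using the numbers from the text (a lifetime risk of 2 in 100,000, halved by the treatment):

```python
def number_needed_to_treat(baseline_risk, relative_risk_reduction):
    """NNT = 1 / absolute risk reduction."""
    treated_risk = baseline_risk * (1 - relative_risk_reduction)
    absolute_risk_reduction = baseline_risk - treated_risk
    return 1 / absolute_risk_reduction

# Lifetime risk of 2 in 100,000, treatment cuts it by 50%.
nnt = number_needed_to_treat(2 / 100_000, 0.50)
print(round(nnt))  # 100000 people treated to prevent one case
```

An impressive-sounding "50% risk reduction" can still mean treating an entire town to help one person.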

4. Are You Judging the Extremes By the Majority?

Biology and medical research are great for reminding us that not all trends are linear.

We all know that people with very high salt intakes have a greater risk of cardiovascular disease than people with a moderate salt intake. But people with a very low salt intake may also have a high risk of cardiovascular disease.

The graph is U-shaped, not just a line going straight up. The people at each end of the graph are probably doing quite different things.

5. Did You Maybe Even Want to Find That Effect?

Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.

There are numerous examples of this confirmation bias, but studies such as this reveal how disturbing the effect can be.

In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.

6. Were You Tricked by Sciencey Snake Oil?

You won’t be surprised to hear that sciencey-sounding stuff is seductive. Hey, even advertisers like to use our words!

But this is a real effect that clouds our ability to interpret research.

In one study, non-experts found even bad psychological explanations of behavior more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!

7. Qualities Aren’t Quantities and Quantities Aren’t Qualities

For some reason, numbers feel more objective than adjective-laden descriptions of things. Numbers seem rational; words seem irrational. But sometimes numbers can confuse an issue and mislead an audience.

For example, we know that people don’t enjoy waiting in long queues at the bank. If we want to improve this, we might be tempted to measure waiting times and then strive to reduce them.

But in reality, you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.

If you asked people to describe how waiting made them feel, you might discover it’s less about how long it takes, and more about how uncomfortable they are.

8. Models, By Definition, Are Not Perfect Representations of Reality

A common battle-line between climate change deniers and people who actually understand the evidence is the effectiveness and representation of climate models.

But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.

While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their sub-elements work.

This doesn’t mean that people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study, and experience.

9. Context Matters

Former US President Harry Truman once complained about economists who gave him advice, only to immediately qualify it with an “on the other hand”.

Individual scientists – and scientific disciplines – might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.

To ponder this, we can look at bike helmet laws. It’s hard to deny that if someone has a bike accident and hits their head, they’ll be better off if they’re wearing a helmet.

Do bike helmet laws stop some people from taking up cycling? Image Credit: Flickr/Petar, CC BY-NC

But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.

Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and now helmet use may in fact be negatively impacting overall public health.

Valid, reliable research can find that helmet laws are both good and bad for health.

10. Just Because It’s Peer Reviewed, That Doesn’t Make It Right

Peer review is held as a gold standard in science (and other) research at the highest levels.

But even if we assume the reviewers made no mistakes, the publication policies carried no biases, and there was no outright deceit, an article appearing in a peer-reviewed publication just means the research is ready to be put to the community of relevant experts for challenging, testing and refining.

It does not mean that it’s perfect, complete or correct. Peer review is the beginning of a study’s active public life, not the culmination.

And finally …

Research is a human endeavor and, as such, is subject to all the wonders and horrors of any other human endeavor.

Just as in other aspects of our lives, in the end we have to make our own decisions. And sorry – even appropriate use of the world’s best study does not relieve us of this wonderful and terrible responsibility.

There will always be ambiguities to wade through, so, as in any other human domain, do the best you can on your own – but if you get stuck, get some guidance directly from, or at least originally via, useful experts.

Will J Grant owns shares in a science communication consultancy. He has previously received funding from the Department of Industry. Rod Lamberts has received funding from the ARC in the past. He also holds shares in a science facilitation consultancy. This article comes from The Conversation.