This is how scientists use 'p-hacking', the tweaking of experiments, to get published

The pressure to come up with exciting breakthroughs is so enormous in the world of science that some researchers push their experiments hard in search of something new.

It's called p-hacking, and it involves re-running experiments in several different ways until the data gives up the results needed.

A study into p-hacking has found that some scientists unknowingly tweak their experiments and analyses to increase their chances of getting results that are easily published.

The research by scientists at The Australian National University (ANU) is the most comprehensive investigation into this type of publication bias.

P-hacking happens when researchers either consciously or unconsciously analyse their data multiple times or in multiple ways until they get a desired result.

If p-hacking is common, the exaggerated results could lead to misleading conclusions, even when evidence comes from multiple studies.
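
To make the mechanism concrete, here is a toy simulation in Python (not part of the ANU study; the group sizes and the number of analyses tried are arbitrary assumptions). Even when there is no real effect at all, a researcher who tries several analyses and reports whichever one comes out "significant" will cross the 5 per cent threshold far more often than 5 per cent of the time.

```python
# Toy simulation (illustrative assumptions only): with no real effect,
# trying several alternative analyses and reporting whichever one comes
# out "significant" inflates the false-positive rate well above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000   # simulated studies
n_analyses = 5           # hypothetical number of alternative analyses tried
false_positives = 0

for _ in range(n_experiments):
    for _ in range(n_analyses):
        # Two groups drawn from the same distribution: any difference is noise.
        a = rng.normal(size=30)
        b = rng.normal(size=30)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1   # report the analysis that "worked"
            break

print(f"False-positive rate after trying {n_analyses} analyses: "
      f"{false_positives / n_experiments:.2f} (nominal rate is 0.05)")
```

With five independent tries, the chance of at least one spurious "hit" is roughly 1 − 0.95^5, or about 23 per cent, more than four times the nominal rate.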

“We found evidence that p-hacking is happening throughout the life sciences,” said lead author Dr Megan Head from the ANU Research School of Biology.

The study used text mining to extract p-values (a number that indicates how likely a result at least as strong as the one observed would be to occur by chance if there were no real effect) from more than 100,000 research papers published around the world, spanning many scientific disciplines, including medicine, biology and psychology.
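
As a rough illustration of what such text mining might look like (a sketch only, not the authors' actual pipeline), a simple regular expression can pull reported p-values out of a paper's text:

```python
# A minimal sketch of p-value text mining: pull reported p-values out of
# paper text with a regular expression. The real study's pipeline is more
# sophisticated; the pattern and sample text here are illustrative only.
import re

P_VALUE_PATTERN = re.compile(
    r"p\s*([<>=])\s*(0?\.\d+)",   # matches e.g. "p < 0.05", "p = .049"
    flags=re.IGNORECASE,
)

sample_text = (
    "Treatment improved recall (p = 0.049) but had no effect on "
    "reaction time (p > 0.2). The interaction was significant, p<.01."
)

for relation, value in P_VALUE_PATTERN.findall(sample_text):
    print(f"p {relation} {float(value)}")
# Prints: p = 0.049, then p > 0.2, then p < 0.01
```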

“Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting,” Dr Head said.

“I think that pressure to publish is one factor driving this bias. As scientists we are judged by how many publications we have and the quality of the scientific journals they go in.”

Journals are more likely to publish experiments with new, interesting results.

The study found an unusually high number of p-values that only just crossed the traditional threshold (p = 0.05) that most scientists use to call a result statistically significant.

“This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold,” she said.

“They might look at their results before an experiment is finished, or explore their data with lots of different statistical methods, without realising that this can lead to bias.”
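
That "peeking" behaviour can also be simulated. In the sketch below (group sizes and checkpoints are illustrative assumptions, not taken from the study), a researcher checks for significance after every ten participants per group and stops as soon as the p-value dips below 0.05; with no real effect present, the false-positive rate still climbs well above the nominal 5 per cent.

```python
# Rough sketch of the "peeking" problem: check for significance as data
# accumulate and stop as soon as p < 0.05. Group sizes and checkpoints
# are assumptions for illustration, not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_simulations = 5_000
checkpoints = range(10, 101, 10)   # peek after every 10 participants per group
false_positives = 0

for _ in range(n_simulations):
    # No true effect: both groups come from the same distribution.
    a = rng.normal(size=100)
    b = rng.normal(size=100)
    for n in checkpoints:
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1   # stop early and report "significance"
            break

print(f"False-positive rate with repeated peeking: "
      f"{false_positives / n_simulations:.2f} (nominal rate is 0.05)")
```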

The concern with p-hacking is that it could get in the way of forming accurate scientific conclusions, even when scientists review the evidence by combining results from multiple studies.

The research, The Extent and Consequences of P-Hacking in Science, is published in the journal PLOS Biology.
