Scientists Must Replicate Findings, Ioannidis Says
Science isn’t perfect, but it’s still one of the best things to happen to humankind. Even so, researchers must strive to improve the robustness, efficiency and transparency of their studies, said Dr. John Ioannidis at the Robert S. Gordon, Jr. Lecture in Epidemiology held recently in Masur Auditorium.
“We need to find out how we can best perform, communicate, verify, evaluate and reward research,” said Ioannidis, professor of medicine, health research and policy, biomedical data science and statistics at Stanford University and co-director of the Meta-Research Innovation Center at Stanford.
He said the dominant narrative in biomedical research nowadays is that there is “an over-supply of major true discoveries.” But that simply isn’t the case. Each year, very few new drugs are approved, even though some of the “brightest minds” in the world work in biomedical research.
In a study of highly cited published clinical research, he found it took, on average, 25 to 30 years for a treatment to reach the market. There are exceptions, however. One was a clinical trial that demonstrated the efficacy of triple-drug therapy including a protease inhibitor to treat HIV infection. He was proud to be involved in that trial while working at NIH in the 1990s. It was published only 4 years after bench research made the design of a protease inhibitor possible.
Most original scientific discoveries come from small studies, where biases are very common. Additionally, many scientists work in fields where the odds of success are low. When too many basic scientific findings are essentially wrong, the results will “lead people astray” and waste resources downstream.
He believes scientists must replicate potential new discoveries and see what “survives different efforts to reproduce these results either exactly the same way or with different angles of triangulation.” Reproducibility studies can show whether study results reveal false positives or are exaggerated.
Reproducibility can be grouped into three clusters, Ioannidis noted. One is reproducibility of methods, which means “to repeat as exactly as possible the experimental and computational procedures.” Next is reproducibility of results, which means “we’re doing another study on new participants, samples and observations and we hope to get a result that is consistent, compatible—ideally as close as possible to the original.” The final one is “reproducibility of inference,” which means scientists ask others about their conclusions. They may disagree about what the results mean.
Attitudes toward replication have changed over the past decade. Ioannidis said industry has led the change in preclinical research because companies had spent millions of dollars on experiments that led nowhere. Several companies launched reproducibility checks on highly cited papers from top academic institutions. They found most of the results could not be reproduced.
Although reproducibility efforts are becoming more accepted, they can be “tricky” and “emotional” because investigators of the original work might fiercely challenge the results, with their careers and reputations at stake.
Millions of scientists write research papers. Ioannidis believes they must register their studies, unless the studies are openly acknowledged as exploratory. Openness would also improve the chances that other scientists can check whether the research holds up. Studies should also disclose any conflicts of interest, to become more transparent.
Data sharing has also improved since 2015; before then, it was rarely done. Several medical journals, for instance, have changed their policies to encourage sharing. More work must be done to create a culture in which researchers freely share their data, said Ioannidis.
“We need to find opportunities to change the way we do science in our everyday environment,” he concluded.