Julia Belluz, Brad Plumer & Brian Resnick have published in Vox an exceptionally good analysis of the biggest problems facing modern science, based on a survey of scientists. Modern science is in deep trouble because it is financially starved, and this leads to serious problems with the reliability of published scientific findings. One aspect of the problem, when there is too little money for science, is that scientists, in order to advance their careers, flock to "sexy", popular, instant topics and to statistically significant (positive) findings, which are easier to publish (negative results, which are actually interesting, are extremely hard to publish). A second aspect of the funding shortage is that scientists turn to industry and private sponsors for money, which creates conflicts of interest and, as analyses show, greater bias in published findings. A third consequence of financial starvation is that scientists spend half of their time writing grant proposals instead of doing research and teaching. This is a universal problem.
But that is only the first of the main problems. It is followed by problems with poorly designed studies, the lack of opportunities to replicate results, a broken peer-review system, scientific studies locked behind publishers' paywalls, poor communication of findings, and the stressfulness of a scientific career for young researchers.
Below are some interesting excerpts from this rather long and very interesting analysis. Highly recommended.
The seven biggest problems in science:
- Academia has a huge money problem
- Too many studies are poorly designed
- Replicating results is crucial — and rare
- Peer review is broken
- Too much science is locked behind paywalls
- Science is poorly communicated
- Life as a young academic is incredibly stressful
The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat. Science is rarely practiced to that ideal. But Copernicus believed in that ideal. So did the rocket scientists behind the moon landing.
But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.
“I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter,” says Kathryn Bradshaw, a 27-year-old graduate student of counseling at the University of North Dakota.
Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public.
Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase “publish or perish” hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side.
“With funding from NIH, USDA, and foundations so limited … researchers feel obligated — or willingly seek — food industry support. The frequent result? Conflicts of interest.” (Marion Nestle, food politics professor, New York University)
Already, much of nutrition science, for instance, is funded by the food industry — an inherent conflict of interest. And the vast majority of drug clinical trials are funded by drugmakers. Studies have found that private industry–funded research tends to yield conclusions that are more favorable to the sponsors.
Finally, all of this grant writing is a huge time suck, taking resources away from the actual scientific work. Tyler Josephson, an engineering graduate student at the University of Delaware, writes that many professors he knows spend 50 percent of their time writing grant proposals. “Imagine,” he asks, “what they could do with more time to devote to teaching and research?”
It’s easy to see how these problems in funding kick off a vicious cycle. To be more competitive for grants, scientists have to have published work. To have published work, they need positive (i.e., statistically significant) results. That puts pressure on scientists to pick “safe” topics that will yield a publishable conclusion — or, worse, may bias their research toward significant results.
Alternatively, researchers in the journal mBio recently called for a lottery-style system. Proposals would be measured on their merits, but then a computer would randomly choose which get funded.
“Although we recognize that some scientists will cringe at the thought of allocating funds by lottery,” the authors of the mBio piece write, “the available evidence suggests that the system is already in essence a lottery without the benefits of being random.” Pure randomness would at least reduce some of the perverse incentives at play in jockeying for money.
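The mechanism described above can be sketched in a few lines. This is only an illustration of the general idea (a merit screen followed by a random draw), not the mBio authors' actual proposal; all names and numbers here are made up:

```python
import random

def lottery_fund(proposals, merit_threshold, n_awards, seed=None):
    """Screen proposals on merit, then pick winners at random among those
    that pass the bar. proposals is a list of (name, merit_score) pairs."""
    rng = random.Random(seed)
    eligible = [name for name, score in proposals if score >= merit_threshold]
    rng.shuffle(eligible)  # every eligible proposal has an equal chance
    return eligible[:n_awards]

# Hypothetical review scores for five proposals
proposals = [("A", 8.5), ("B", 9.1), ("C", 6.0), ("D", 7.8), ("E", 8.9)]
winners = lottery_fund(proposals, merit_threshold=7.5, n_awards=2, seed=42)
print(winners)  # two random picks from the proposals scoring >= 7.5
```

The point of the design is that once a proposal clears the quality bar, reviewers' noisy fine-grained rankings no longer decide the outcome, which removes the incentive to game them.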
There are also some ideas out there to minimize conflicts of interest from industry funding. Recently, in PLOS Medicine, Stanford epidemiologist John Ioannidis suggested that pharmaceutical companies ought to pool the money they use to fund drug research, to be allocated to scientists who then have no exchange with industry during study design and execution. This way, scientists could still get funding for work crucial for drug approvals — but without the pressures that can skew results.
“I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,” writes Jess Kautz, a PhD student at the University of Arizona. “And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.”
Increasingly, meta-researchers (who conduct research on research) are realizing that scientists often do find little ways to hype up their own results — and they’re not always doing it consciously. Among the most famous examples is a technique called “p-hacking,” in which researchers test their data against many hypotheses and only report those that have statistically significant results.
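A small simulation makes it clear why p-hacking works. The sketch below (my illustration, not from the article) runs twenty hypothesis tests on pure noise, where no real effect exists, and collects the ones that come out "significant" at p < 0.05; with that many tests, about one spurious finding is expected by chance alone:

```python
import math
import random
import statistics

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means,
    using a normal approximation to the t statistic."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(1)
n_hypotheses = 20
significant = []
for h in range(n_hypotheses):
    # Both groups are drawn from the same distribution: there is no effect.
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_p(group_a, group_b) < 0.05:
        significant.append(h)

print(f"{len(significant)} of {n_hypotheses} null tests look 'significant'")
```

Reporting only the tests that land in `significant`, while staying silent about the other nineteen, is exactly the p-hacking pattern the meta-researchers describe.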
In a recent study, which tracked the misuse of p-values in biomedical journals, meta-researchers found “an epidemic” of statistical significance: 96 percent of the papers that included a p-value in their abstracts boasted statistically significant results.
That seems awfully suspicious. It suggests the biomedical community has been chasing statistical significance, potentially giving dubious results the appearance of validity through techniques like p-hacking — or simply suppressing important results that don’t look significant enough. Fewer studies share effect sizes (which arguably give a better indication of how meaningful a result might be) or discuss measures of uncertainty.
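To make the effect-size point concrete, here is a minimal sketch (with made-up data) of Cohen's d, one common effect-size measure: it expresses how large a difference is in standard-deviation units, which a p-value alone does not tell you:

```python
import math
import statistics

def cohens_d(a, b):
    """Difference in means, scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt((statistics.variance(a)
                           + statistics.variance(b)) / 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Illustrative (fabricated) measurements for two groups
treatment = [5.1, 5.4, 4.9, 5.6, 5.2, 5.0]
control = [4.8, 5.0, 4.7, 5.1, 4.9, 4.6]
d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")  # size of the effect, independent of sample size
```

Unlike a p-value, d does not shrink or grow just because the sample got bigger, which is why reporting it alongside significance gives a fuller picture of how much a result matters.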
“I think the one thing that would have the biggest impact is removing publication bias: judging papers by the quality of questions, quality of method, and soundness of analyses, but not on the results themselves,” writes Michael Inzlicht, a University of Toronto psychology and neuroscience professor.
Some journals are already embracing this sort of research. PLOS One, for example, makes a point of accepting negative studies (in which a scientist conducts a careful experiment and finds nothing) for publication, as does the aptly named Journal of Negative Results in Biomedicine.
More transparency would also help, writes Daniel Simons, a professor of psychology at the University of Illinois. Here’s one example: ClinicalTrials.gov, a site run by the NIH, allows researchers to register their study design and methods ahead of time and then publicly record their progress. That makes it more difficult for scientists to hide experiments that didn’t produce the results they wanted. (The site now holds information for more than 180,000 studies in 180 countries.)