Why Most Published Research Findings Are False

This matches what I intuitively feel after reading a lot of scientific content and seeing arguments based on published (or unpublished) literature. It does seem true that most research or scientific content is false. The best evidence we have comes from large randomized controlled trials. The worst is Mendelian randomization.

Does that mean no one should ever take rapamycin? No, but about 80% of the other stuff, or the other arguments, should probably be disregarded if the limitations are not clearly stated. It seems no one bothers to state the limitations of what they say or write.

3 Likes

Including John Ioannidis' papers?

It reminds me of the old saying: "50% of advertising expenditures are a complete waste of money; the problem is you don't know which 50%."

Ultimately you have to make decisions with uncertainty, and the best information available. Well-done scientific papers are the best sources we have, though of course imperfect and incomplete.

7 Likes

But most scientific content here is high uncertainty and low impact. It might be perceived as something different at first glance.

But that seems to be simply restating that in any large sample of research papers, there will always be a small percent that are high impact, high certainty.

The fact that we discuss most new aging-related research papers means the same distribution curve will show up in what we discuss. I still think there is value in discussing new papers related to aging. But yes, it is always good to keep in mind the sample size, the nature of the study, the source of the study, etc. (the factors that go into a "high impact" or "high certainty" study).

3 Likes

So, I'd like to say that this research finding itself seems false. I believe the correct number for false Western studies is about 15%, not 50% (see the rebuttals below). If you want 50% or higher, look to China, Russia, and North Korea. So, when the author of a study is falsifying the number of falsifications, I'll throw the whole heap in the rubbish bin. Hey, there's at least a 50% chance his research is false too!

If you'd like more information, there's a whole Wikipedia article here:

Here's a sample rebuttal:

Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and did an investigation of their own which calculated that the false positive rate in biomedical studies was estimated to be around 14%, not over 50% as Ioannidis asserted.[12] Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians. Leek summarized the key points of agreement as: when talking about the science-wise false discovery rate one has to bring data; there are different frameworks for estimating the science-wise false discovery rate; and "it is pretty unlikely that most published research is false", but that probably varies by one's definition of "most" and "false".[13]

Statistician Ulrich Schimmack reinforced the importance of the empirical basis for models by noting the reported false discovery rate in some scientific fields is not the actual discovery rate because non-significant results are rarely reported. Ioannidis's theoretical model fails to account for that, but when a statistical method ("z-curve") to estimate the number of unpublished non-significant results is applied to two examples, the false positive rate is between 8% and 17%, not greater than 50%.[14]
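Schimmack's point, that unreported non-significant results distort the picture, is easy to see in a toy simulation of a literature where only "significant" results get published. All numbers below are made-up illustrations, not estimates for any real field:

```python
import random

random.seed(0)

def published_fdr(n_studies, prior_true, power, alpha):
    """Simulate a literature where only 'significant' results are published,
    then measure the false discovery rate among those publications."""
    false_pos = true_pos = 0
    for _ in range(n_studies):
        effect_is_real = random.random() < prior_true
        if effect_is_real:
            significant = random.random() < power   # detected with prob = power
        else:
            significant = random.random() < alpha   # false alarm with prob = alpha
        if significant:                             # only positives are "published"
            if effect_is_real:
                true_pos += 1
            else:
                false_pos += 1
    return false_pos / (false_pos + true_pos)

# With a 10% prior, 80% power, and alpha = 0.05, roughly a third of the
# published positives are false, with no fraud anywhere in the pipeline.
print(round(published_fdr(100_000, prior_true=0.10, power=0.80, alpha=0.05), 2))
```

Raising the prior (testing fewer long shots) or preregistering negative results both push that fraction down, which is why the "how false is the literature" number depends so heavily on assumptions.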

However, it's good to apply some skepticism here.

DeStrider's Corollaries:

  1. RCT studies always do a Phase 1 safety study to determine if a substance is safe for human use. So all successful Phase 3 studies have proven short-term safety.
  2. Even if results are not real, there is always the placebo effect which confers about a 30% benefit.
  3. Given that there is high safety and a high chance of benefit (real or placebo) for a reported RCT trial, the only thing you are truly wasting by giving an RCT-tested supplement/medication/therapy a shot is money and time.
  4. Odds are higher that you'll be able to have at least one winner if you try a wide range of 'proven' RCT-trialed substances.

It's Pascal's Wager for each supplement, and I've got a lot of chips to spend at this casino.

Because I don't want to cash in my chips early. :wink:

This is also the thought process behind venture capital. For every 20 they fund, they hope to get at least 1 winner and that winner makes up for all the failures.
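That "at least one winner" logic is just the complement rule from basic probability. A two-line sketch, using a hypothetical 10% per-bet hit rate:

```python
def chance_of_a_winner(n_bets, p_win):
    """Probability that at least one of n independent bets pays off."""
    return 1 - (1 - p_win) ** n_bets

# Even at a 10% hit rate per bet, 20 independent bets give an ~88% chance
# that at least one succeeds.
print(round(chance_of_a_winner(20, 0.10), 2))  # 0.88
```

The catch, of course, is the independence assumption: if the bets fail for correlated reasons (say, a whole class of interventions resting on the same flawed literature), the real odds are worse.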

4 Likes

So you think RCT data cannot be manipulated to show greater benefits, i.e., a greater statistical difference?

What has changed your position on MR? I thought you were pretty religious about MR.

3 Likes

Depends what you call "false". They might make mistakes, omit things, be biased, etc., and the paper would not be "false" but would still lead to a wrong conclusion. This is also common in articles in the West. But yeah, Western papers tend to be of higher quality. (I don't think North Koreans publish papers.) In general, I assume about a third of papers are wrong, so I want three papers by different teams pointing in the same direction to be "convinced" of something. Below that, it's just some signal.

1 Like

I think you need to watch the video. The point is that the cause is not fraud but pure chance: when you start testing hypotheses with a low a priori probability, you will find a lot of false positive results, and the positive predictive value will be low. Besides that, there are other biases, like the pressure to publish positive new results, which leads to p-hacking and to negative results going unpublished.
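The arithmetic behind that low-prior point is just Bayes' rule. A minimal sketch, where the 1% prior and 80% power are made-up illustrative numbers:

```python
def positive_predictive_value(prior, power, alpha):
    """Bayes' rule: the share of significant findings that reflect real effects."""
    true_pos = prior * power          # real effect, correctly detected
    false_pos = (1 - prior) * alpha   # no effect, false alarm
    return true_pos / (true_pos + false_pos)

# Testing long-shot hypotheses (1% prior) at alpha = 0.05 with 80% power:
# only ~14% of "positive" findings are true, with no fraud involved.
print(round(positive_predictive_value(0.01, 0.80, 0.05), 2))  # 0.14
```

With a 50% prior and the same power and alpha, the same formula gives a PPV above 90%, which is why the plausibility of what you choose to test matters as much as the statistics.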

4 Likes

I think you are mistaking them for drug approval phases.

Yes, it can be manipulated or be accidentally "right"; that is why most RCTs are false as well.

Meta-analyses are actually the mortgage-backed securities of science: potent persuasion.


https://twitter.com/St_Rev/status/1761252847876772270

1 Like