The Editors in Chief of the journal certainly don't think it's a "horrible paper": they picked that very paper as the subject of the editorial for their journal issue, commenting: "That uncertainty does not detract from the importance of this remarkably thorough and compelling paper."
So who is correct? The reviewers and Editors in Chief - all research scientists in biology/medicine, most with their own labs? Or the cardiologist/biostatistician? I could understand a more nuanced disagreement. But "pretty horrible paper" is pretty much the polar opposite point of view.
The 3 nematode strains used (living nematodes, not just nematode cells, by the way) are standard models for AD research and are provided by an NIH-designated national lab.
I actually agree that animal models in AD are in general not of much value. But that's a weakness ingrained in all AD research - not just this paper. Papers relying on animal models in isolation can therefore be set aside until the evidence for a substance includes human-level data.
I can only relay what I can find online about the database:
It seems to be a more general database with access to patient data from US-based health insurers (medication claims data as well as diagnoses) and is marketed as a research tool for observational studies. Nowadays there are a handful of better, government-run databases available (2 in the US, 1 each in the UK, China, and South Korea) - but they require a somewhat lengthy vetting process and cooperation with a national research institute (i.e. most published observational research is done on different data).
By "analytical strategy" I wager you mean more detail on the regression analysis, the statistical tests used, and potential sensitivity analyses? I agree that the authors should have described that part in more detail. Though that doesn't mean that 1.) they didn't do it or 2.) they don't know what they are doing - as it's a very long paper already, the reviewers/editors may simply have told them to cut it short (i.e. the reviewers know the details and approved publication on that basis).
As the editors note, the authors are remarkably thorough in their methodological approach - so, to draw the contrast again:
Commonly, some researcher has a pet substance or data from mice where the substance "works" - and runs an observational study to gather more evidence. Or they simply ran regression analyses for potential effects of various substances on a condition. Fishing for associations. Simply by chance, you will find effects. Therefore, in a next step, you need to pin down the mechanism - which is often surprisingly difficult in AD.
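To make the "fishing" point concrete, here is a toy simulation (my own illustration, not from the paper - all numbers except the 1,800-drug library size are made up): screen 1,800 substances that have no true effect at all, and count how many still clear p < 0.05 purely by chance.

```python
# Toy illustration of the multiple-comparisons problem: test 1,800
# hypothetical null substances (zero true effect) and count how many
# look "significant" anyway. Patient counts are invented for the demo.
import random
import math

random.seed(42)

N_SUBSTANCES = 1800   # roughly the size of the approved-drug library mentioned
N_PER_GROUP = 200     # hypothetical patients per arm
ALPHA = 0.05

def null_comparison_pvalue(n, sd=1.0):
    """Simulate a two-group comparison where the true effect is zero
    and return the two-sided p-value of a z-test on the mean difference."""
    a = [random.gauss(0.0, sd) for _ in range(n)]
    b = [random.gauss(0.0, sd) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = sd * math.sqrt(2.0 / n)          # standard error of the difference
    z = mean_diff / se
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

false_hits = sum(null_comparison_pvalue(N_PER_GROUP) < ALPHA
                 for _ in range(N_SUBSTANCES))
print(f"{false_hits} of {N_SUBSTANCES} null substances look 'significant'")
# expectation is ~ALPHA * N_SUBSTANCES, i.e. on the order of 90 spurious hits
```

So a naive screen of that many candidates hands you dozens of "effects" for free - which is exactly why picking one candidate on mechanistic grounds *before* the observational study matters.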
The authors are doing the opposite. They do extensive in-vitro work in human cells and tissue surrogates, in human brain slices from AD and non-AD patients, and in nematodes to detail a potential mechanism. Next, from data on over 1,800 approved drugs, they calculate that EZ would inhibit that mechanism (modern AI-driven models can do that easily) - which they then test in nematodes and neuronal tissue. Their prediction: EZ should have an effect in AD, based on their novel mechanism. That's not fishing for associations, but picking one candidate out of 1,800 on the basis of comprehensive groundwork - before doing the observational study. Combine that with the truly striking effect size, the low cost, and the safety profile.
Do I think that EZ is the most promising candidate out there? No - I think that place is currently reserved for lithium, particularly once we get the data from the LATTICE trial.
But if the observational part of the EZ study can be replicated in another observational dataset, it is surely a candidate above almost everything else - even if the replication finds only half the effect size. The authors themselves are working on an RCT - but that would take a long time.