Low power is the norm in almost every field, including neuroscience, political science, environmental science, and medicine, as well as in breast cancer, glaucoma, rheumatoid arthritis, Alzheimer’s, epilepsy, multiple sclerosis, and Parkinson’s research. When performing a meta-analysis, you are almost certainly working with underpowered studies, and meta-analytic results will reflect this. Meta-analysis and corrections for publication bias can only go as far as the provided data allows, and if the quality is low enough, all that can be obtained is a biased and unrealistic result.
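To make this concrete, here is a minimal simulation sketch, assuming a crude significance-filter model of publication bias and arbitrary, made-up parameter values: many underpowered studies of a modest true effect are run, only the significant ones see print, and a naive inverse-variance meta-analysis of the surviving record lands far from the truth.

```python
# Minimal sketch: underpowered studies plus a significance filter inflate
# the meta-analytic estimate. All parameter values are illustrative
# assumptions, not taken from any real literature.
import numpy as np

rng = np.random.default_rng(0)
true_d, n_per_arm, n_studies = 0.20, 20, 5000  # ~10% power at alpha = .05

published_d, published_v = [], []
for _ in range(n_studies):
    t = rng.normal(true_d, 1.0, n_per_arm)  # treatment arm (SD = 1)
    c = rng.normal(0.0, 1.0, n_per_arm)     # control arm
    d = (t.mean() - c.mean()) / np.sqrt((t.var(ddof=1) + c.var(ddof=1)) / 2)
    v = 2 / n_per_arm + d**2 / (4 * n_per_arm)  # approximate variance of d
    if abs(d / np.sqrt(v)) > 1.96:  # the significance filter: p < .05 gets published
        published_d.append(d)
        published_v.append(v)

w = 1 / np.array(published_v)  # inverse-variance (fixed-effect) weights
d_meta = np.sum(w * published_d) / np.sum(w)
print(f"true d = {true_d}, published {len(published_d)}/{n_studies} studies")
print(f"meta-analytic estimate of the published record: {d_meta:.2f}")
```

Under these assumptions, fewer than one study in ten survives the filter, and the pooled estimate comes out above 0.6 against a true effect of 0.2. Any bias-correction method has to work backward from that censored record, which is why it can only go as far as the data allows.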
As noted above, no amount of correction solves these problems. “Garbage in, garbage out” is a problem that meta-analysis cannot solve; getting around it requires new studies, not the tired reanalysis of garbage. Someone who decides to check study quality as a moderator of meta-analytic effects may think they can handle quality issues that way. How well they actually do depends on how well they’ve coded study quality, and specifically on whether the coded dimensions of quality track the variation in the estimates used in the meta-analysis.
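The mechanics of that check are simple enough; the coding is the hard part. Here is a minimal sketch with invented data, using a simplified fixed-effect meta-regression rather than the random-effects version most meta-analytic software (e.g., metafor’s rma() in R) would default to:

```python
# Minimal sketch: testing coded study quality as a moderator via a
# fixed-effect meta-regression of effect sizes on a quality score,
# weighted by inverse variance. The data below are entirely made up.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.61, 0.55, 0.38, 0.22, 0.15, 0.10])    # observed d, one per study
variance = np.array([0.09, 0.08, 0.05, 0.03, 0.02, 0.02])  # sampling variance of d
quality = np.array([1, 1, 2, 3, 4, 4])                     # coder-assigned quality (1 = worst)

X = sm.add_constant(quality)  # intercept plus the quality moderator
fit = sm.WLS(effect, X, weights=1 / variance).fit()
print(fit.params)  # negative slope here: higher-quality studies yield smaller effects
```

Even then, a null moderator result is only as reassuring as the coding behind it: if the dimensions of quality that actually distort the estimates were coded coarsely or not at all, the test will find nothing while the pooled estimate stays biased.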
Peer review is not magical. If you’ve ever participated in it or been the subject of it, you’re probably aware of how bad it can get. As many have recently learned from the preprint revolution, it also doesn’t seem to matter for publication quality. The studies I mentioned in the previous section on fraud all passed peer review, and it’s almost certain that every bad study or meta-study you’ve ever read did too.
The cachet earned by peer review is undeserved. It does not protect against these problems, and it’s not clear it has any benefits whatsoever when it comes to keeping research credible. Because peer review affects individual studies heterogeneously, it can scarcely make a dent in keeping meta-analyses credible either. The meta-analyst has to trust that peer review benefited every study in their analysis, but if, say, a reviewer preference for significant results affected the literature, that preference could have been the source of its publication bias. A preference for any feature by any reviewer of any of the published or unpublished studies in a literature could be similarly harmful; significance is just one feature reviewers commonly prefer.
When it comes to reviewing meta-analyses, peer reviewers could theoretically read through every study cited and suggest how to code study quality or which studies should be kept and which removed. Ideally, they would; realistically, when there are many studies, that’s far too much to ask. And you usually won’t know whether review helped in any individual case, or in any meta-analysis, because most peer reviews are not publicly reported. Peer review is a black box. If you don’t take experts’ word for granted, why would you trust it?
Peer review is simply not something that helps the man of many studies. At best, it protects him when a meta-analysis is done poorly enough that reviewers notice and do something about it, like telling the researchers under review to change their estimator. If reviewers instead tell them to seek publication elsewhere, the researchers can keep shopping the paper around until they find reviewers credulous enough to let their garbage through.
Because of how little evidence there is that peer review matters, I doubt it helps the man of one or many studies often enough to be given any thought.