What are some of the best biotechnology/longevity/neuro/biosecurity roadmaps that you have seen?

example inspiration: https://chatgpt.com/share/69580008-85b4-800c-9064-8adf39c10af6

One example document: the Amaranth Foundation's "Bottlenecks of Aging" roadmap.
https://static1.squarespace.com/static/64becfbb6ea1fd445440d59d/t/65c584f0e1e3ad1f112d42cc/1707443453272/brain-aging-white-paper.pdf
https://www.sciencedirect.com/science/article/pii/S1934590925000992

Roadmaps that help smooth out the coordination problem, so people do not invest excess amounts of redundant effort on competition.

Ideally they would be like the SRC (Semiconductor Research Corporation) roadmaps, which were enough to solve the coordination problem of semiconductor manufacturing.

[I know epoch.ai does forecasting/roadmapping, but it didn't produce "supercooperation".] There is still much that longevity people can learn from epoch.ai!

[Too much detail can also impede readability/"action", and rationalists are known to be "low in action bias". However, extremely detailed roadmapping can be enough to convince otherwise-skeptical philanthropists to throw in funding, if that level of detail is what "goes over the edge" for people who are a little more skeptical. And making "high detail of uncertainty" more parseable is an easily-solved-with-2025-AI issue.]

I don't know if roadmappers have historically been great at factoring in "surprisal". I know Katja Grace has tried to map out historical examples of "discontinuous technological progress", but many of the early rationalists (including Cotra's 2020 biological-anchors report) didn't adequately factor in the "surprise" of the transformer architecture.

Longevity roadmappers may also not be the best at factoring in "huge surprises" (iPSC reprogramming was a huge surprise that people doing "first principles thinking" would likely have missed). There are other potential surprises in "gene delivery"/"gene therapy"/"multiplex editing"/"tissue engineering" (or whatever the AI of the future may discover for us). Some sources of uncertainty can "cancel out long-run" [esp. if you pursue independent basis vectors in parallel to compensate for uncertainty in any one research direction]; some sources of uncertainty do not.
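The "independent basis vectors in parallel" point can be sketched as a toy Monte Carlo (all distributions and parameters below are hypothetical, purely for illustration): if each research direction's time-to-breakthrough is an independent random variable, a roadmap that funds several directions in parallel only needs the *fastest* one to land, which shrinks the expected time to first success and dampens the uncertainty of any single path.

```python
import random

def time_to_first_success(n_paths, trials=10_000, seed=0):
    """Average time until the FIRST of n_paths independent research
    directions pays off. Each path's time-to-breakthrough is drawn from
    a lognormal distribution (heavy right tail, all positive) with
    made-up parameters -- this is an illustrative sketch, not a model
    calibrated to any real field."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Overall time = minimum across the parallel, independent paths.
        total += min(rng.lognormvariate(2.0, 0.8) for _ in range(n_paths))
    return total / trials

# With more independent parallel paths, expected time-to-first-success
# drops: time_to_first_success(3) < time_to_first_success(1).
```

The qualitative takeaway (min of several independent draws beats any single draw) is robust to the made-up parameters; what it does *not* capture is correlated uncertainty, which is exactly the kind that fails to cancel out.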

[High leverage/ROI/"effect multipliers" are also important to model.] I think the https://www.aifuturesmodel.com/ people (esp. Romeo Dean) are responsive to feedback and have made first attempts at modeling how some sources of uncertainty interact with other sources of uncertainty, even if the model is not complete. AI 2027 was not very convincing (and will probably turn out wrong), but integrity plus being able to change your mind (resilience/adaptability) is way more important, and Daniel Kokotajlo is full of these traits (his pre-AI-2027 forecast of AI in 2026 was surprisingly on the mark for many of its bullet points).

AI Futures Model even tries to model human research taste!!

[Evolutionary biology of aging is surprisingly good at forcing people to expand what their first principles are, as we are now discovering with the bowhead whale and the Greenland shark.]

==

Roadmapping is the opposite of the "greatness cannot be planned" approach Ken Stanley advocates (Ken Stanley is notoriously opposed to benchmarking). The success of the SRC still proves that "greatness can be planned" [much of it depends on the J-vs-P personality leaning that Sarah Constantin is v. attentive to], though Ken Stanley's book is still worth reading.

https://x.com/i/status/2000222761273438390

https://www.lesswrong.com/posts/5T5FcCCEDxoAyCMiL/linkpost-visual-roadmap-to-strong-human-germline-engineering is a well-constructed roadmap (it even attracted the attention of a rare but potentially hyper-talented Stanford student…)

[Sometimes you could also just do the richardngo thing of screen-recording your workflow, which makes it way easier for AIs to fill you in on your blind spots/what you're spending too much time on.]

[You also don't need to attach years to roadmaps; still, it's nice to show the parallel paths for current to flow through in case one path gets bottlenecked.]

(https://x.com/bravo_abad is just… good at forecasting sudden leaps in materials science, and there is a substantial chance that advances in materials science/engineering are upstream of progress everywhere else.) Some think AI will unbottleneck areas that previously had bottlenecks, but this depends on AI accelerating materials science [even the latest results from Until come from a branch of materials science/condensed matter physics research, though I don't know whether its current progress needed the insights of the latest advances in those areas].

[Also, I remember the NRC astrobiology roadmaps: https://www.nationalacademies.org/read/26522/chapter/1#ix and https://www.liebertpub.com/doi/10.1089/ast.2008.0819?url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&rfr_dat=cr_pub++0pubmed .] They're inspiring in imagination, though maybe not necessarily in level of detail/dependency mapping.
[And a semi-roadmap of an emerging field of AI agents: The AI Agent Index.] The NRC has produced A LOT of roadmaps that are kind of fun to read.