AGI/AI/the singularity (drug synthesis + clinical trials + technology acceleration): will it come fast enough?

It’s an important topic for all of us.

Matthew Barnett and John Wentworth (both of whom used to post a lot about longevity on LW) have increasingly bullish AI timelines and are focusing more on AI than longevity. Johannes Hagemann is also spending more of his time on AI/scaling curves than on longevity.

Noah Weber has one of the best guides ever

Just look at how many benchmarks have been razed in protein structure prediction this year


Computationally oriented areas will likely continue to move quickly - protein folding, as you mentioned, but also look at ChatGPT and the AI drawing sites like https://stableboost.ai/

But the process of translating things into therapeutics / small-molecule drugs, and other approaches in biotech/pharma, always takes much longer than anticipated… so I’m doubtful about the entire concept of the singularity as it relates to longevity, biotech, and drug development.

LEV is an interesting concept, but we have no real data to suggest it will happen anytime soon.


It takes longer, but developments may still arrive with a time lag of 10 to 15 years. Downstream effects may come much sooner, especially in countries with less regulation that want a first-mover advantage (Praxis Society/Prospera might implement Balaji’s network state + deregulated trials/testing [1]). Also, more comprehensive longitudinal monitoring of individuals (there are scaling laws that imply exponential cost decreases for omics technologies, as we already see in genome/transcriptome sequencing) means more people can self-experiment with lower risk: continual monitoring means they can cut off interventions as soon as they find something problematic, or as soon as they notice whether the interventions work.
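For intuition on what an exponential cost decline implies for self-experimenters, here is a minimal sketch; the starting cost and halving time are illustrative assumptions, not measured values:

```python
# Toy projection of per-assay cost under an assumed exponential decline,
# loosely echoing how genome sequencing costs have fallen over the years.
start_cost = 1000.0   # assumed cost of a multi-omics panel today, in USD (hypothetical)
halving_time = 2.0    # assumed number of years for the cost to halve (hypothetical)

for years_out in range(0, 11, 2):
    cost = start_cost * 0.5 ** (years_out / halving_time)
    print(f"year +{years_out:2d}: ~${cost:,.0f} per panel")
```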

Someone may finally develop a solution that rewards people for taking the risk of self-experiments (it may involve blockchain economics) - this would make them happen more often

downstream effects of AGI may be so large as to convince people of things they might otherwise not be convinced of (eg deregulation of new edge cases)

Robotics has made very surprising advances just this year (the Keerthana paper is revealing)

Fwiw ppl I highly trust are updating their estimates of the singularity - some as aggressive as by the end of this decade

Automated drug synthesis may become a thing. Clinical trials have been getting more expensive over time - HOWEVER - there are now attempts to decrease their cost, especially in Prospera, and as omics technologies become way cheaper, smaller groups can run their own trials, audit them, and (in combination with automated drug synthesis) produce results convincing enough for some people to try interventions way before they get FDA-approved (this is already kind of happening with some exotic treatments like plasma rejuvenation and exosomes)

Adept.ai (roon says, with SOME justification, that the people behind it may be more important than any Indian scientist since Bose/Chandrasekhar) may provide the grounding for easy automation of anything with an API (this includes all the “biology automation API” startups that are being founded - eg Mark Zhang’s or Dhash’s). Also, the new Ora startup by Mitchell Lee will show the kinds of testing/analytics that will become possible soon (it’s not just C. elegans - the aging-analytics algorithms can also be applied to mice and larger organisms)

Bryan Johnson is already trying to recruit a few people to do super-detailed/precise analytics of their own bodies (after intervention)

This year surprised many expert observers like crazy. 2023 will be no different (and we may see even further acceleration). We will see surprising new developments. Biological data is messy and it may be harder to do RLHF on it (with longer wait times), but something will happen

ALSO, combinatorial experiments (and better measurements/analytics) make it much easier to develop a first-principles understanding of biology (esp protein function/protein creation) => easier to find targeted interventions down the line [and esp the interventions that robustly work across a wide range of human cell lines, going WAY beyond the commonly used ones]. The cult of statistical significance has made everyone use the same model organisms with minimal variation within them, but efforts IN THE GENERAL DIRECTION of Seemay Chou’s Arcadia (+protocols.io) will make it easier to run experiments across a wider range of organisms/genetic backgrounds => find the space of interventions with lower likelihoods of causing adverse reactions. People are already testing mTOR inhibitors (and delivery methods) that may work better than oral rapamycin, and this could (at least) buy us a few more years.
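To make the “wider range of genetic backgrounds” point concrete, here is a toy simulation (all effect sizes, noise levels, and counts are made up) showing how “robust across backgrounds” is a different question from “significant in one inbred strain”:

```python
import random

random.seed(0)

# Hypothetical per-background response to an intervention: each genetic
# background gets its own true effect, plus measurement noise.
n_backgrounds = 50
true_effects = [random.gauss(0.10, 0.15) for _ in range(n_backgrounds)]   # assumed spread
measured = [e + random.gauss(0, 0.05) for e in true_effects]              # assumed noise

helped = sum(m > 0 for m in measured) / n_backgrounds
harmed = sum(m < -0.10 for m in measured) / n_backgrounds

print(f"backgrounds with a positive measured effect: {helped:.0%}")
print(f"backgrounds with a clearly negative effect:  {harmed:.0%}")
# A robust intervention helps most backgrounds and rarely harms any,
# which is not the same as clearing p < 0.05 in a single standard strain.
```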

Also, efforts in the general direction of [all of the above] will make it easier to develop + culture human cell lines more robustly and (hopefully) make up for some of their weaknesses (eg a Sturm paper says they have much shorter lifespans).

There will be a bot that can automatically do venipunctures/injections at some point too. Is there really anything that can “ban” this before it gets “FDA-approved”, esp if it is only advertised informally (and esp as 3D printing advances make these things easier to produce)? [It’s the algorithm that’s more important than the machine.]

[1] Prospera is one example, but there may be governments elsewhere that deregulate clinical trials - if it was so easy for individual countries to finally decriminalize psychedelics, this isn’t that much of a stretch. I know the FDA/NIH take forever to reform, but for another country it may not be too much to draft up (with the help of AI) a new set of safe clinical trials that make better use of “-omics” data + quantified-self/motion-or-video-analytics data (like Ora, or the startup by Morten’s postdoc, or Leon Pesha’s Daphnia work) to test drugs with way fewer people and way lower costs.
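To put rough numbers on “way fewer people”: the standard two-arm sample-size approximation shows how shrinking per-person measurement noise (which denser omics/quantified-self data might enable) cuts the required n. The effect size and outcome variances below are hypothetical:

```python
from math import ceil

def n_per_arm(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm for a two-sample comparison of means
    (two-sided alpha = 0.05, ~80% power)."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

delta = 0.5                     # hypothetical treatment effect, in outcome units
for sigma in (2.0, 1.0, 0.5):   # hypothetical outcome SDs: noisy -> well-measured
    print(f"sigma = {sigma}: ~{n_per_arm(sigma, delta)} participants per arm")
```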

on off-label use of devices: it’s important to make the distinction between FDA-cleared and FDA-approved (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6220002/). FDA-cleared is a lower bar

some treatments (like stem cell treatments) have many clinics with minimal oversight and are risky for now (I know oversight will become easier over time, esp b/c it is very outsourceable to an independent institution - I do not yet know of one that does it, but it MAY [or may not] come v. soon). For all the other experimental treatments, there may be external clinics that offer them (independent states that allow for therapy with certain Schedule I drugs show that this is possible).

Peptides are an experimental treatment that some people try, and while there are attempts to crack down on some suppliers of peptides, new ones come up (and peptide synthesis is becoming easier over time).

Organoids will be easier to grow (see Herophilus). I don’t know how long it will take for them to produce data translatable enough to significantly shorten the drug discovery pipeline, but they are a source of new upside risk.

==
Also, like, if we make significant progress in transformer interpretability next year, it may be correlated with (though it certainly doesn’t guarantee, given the increased complexity of the interactome) significant progress in the interpretability of biological modules, especially automated interpretability. More data + better database design + automated addition/updating of biological data into a non-static [[intervention database]] may be one route to increasing the interpretability of studies with lower sample sizes, and could make all published diagrams auto-update when new data is added to the database.
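As a sketch of what a non-static intervention database could look like (the class names and fields below are hypothetical, not any existing tool), the key property is that summaries are recomputed from the underlying records whenever a study is added, so nothing rendered from them goes stale:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class StudyRecord:
    intervention: str
    organism: str
    effect_size: float   # e.g. fractional change in median lifespan
    n: int

@dataclass
class InterventionDB:
    records: list = field(default_factory=list)

    def add(self, record: StudyRecord) -> None:
        # Adding a study immediately changes every downstream summary,
        # so any diagram rendered from summarize() never goes out of date.
        self.records.append(record)

    def summarize(self, intervention: str) -> dict:
        rows = [r for r in self.records if r.intervention == intervention]
        return {
            "studies": len(rows),
            "organisms": sorted({r.organism for r in rows}),
            "mean_effect": mean(r.effect_size for r in rows) if rows else None,
        }

db = InterventionDB()
db.add(StudyRecord("rapamycin", "mouse", 0.12, n=60))        # hypothetical numbers
db.add(StudyRecord("rapamycin", "C. elegans", 0.20, n=200))  # hypothetical numbers
print(db.summarize("rapamycin"))
```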

IDK if Unlearn (“AI startup Unlearn adds $50M for better, faster, smaller & cheaper clinical trials” - MedCity News) will work [always note the high failure rate of startups], but it has many steps in the right direction.

I don’t think we’re getting to LEV super-near-term, but some people have experienced very surprising results from stem cell therapies/exosomes alone, and this may buy people “enough time” to make it to the point where AI gives the world a chance to build something that changes the world as much as the Internet did. The 2010s felt like a huge decade of stagnation at the translational/Internet level (though I know it really wasn’t at the pre-translational level), but AI will make things VERY interesting near-term.

more reading:


Matt Kaeberlein’s startup is doing some really cool stuff that will be testing thousands and thousands of compounds in an automated way.

People will debate whether we’ve really reached AGI for a while, but in the meantime AI is getting better and better and is going to be doing some crazy things soon. AlphaFold, but for everything.

After the singularity, all bets are off. Not to be too dramatic, but true singularity AI is not going to be taking orders from us lowly humans.


I think the hardest problems in biology/biotech come from the dual problems of control/delivery and highly discriminative sensing, esp when trying to maintain control of molecules where collisions make the molecule jittery and not super easy to localize.

Evolving proteins (or protein motifs) for both sensing and control/delivery (and briefly “opening up” rigid parts of the cell => maybe ultrasound can do this) is the first step for making highly targeted interventions. I don’t know if proteins can do everything, but they can do many things, and progress in protein design (+ protein evolution) has taken off super-rapidly relative to progress in more complicated cellular stuff.

I know there are some blunt-force instruments to deliver “repair molecules” attached to “damage detection motifs” (a good example, presented at Berlin this year, was Elastin Therapeutics). It’s unclear if Elastin Therapeutics will work (or if the therapeutic is dynamic enough to pivot the “repair signal” to the damaged elastin), but it’s in the general direction of future trends that are important.

anyways, automated labs will capture the entire git history of experiments (and this will be easier than solving the “inverse problem” of credit assignment after the fact)
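A minimal sketch of the “git history of experiments” idea, assuming the lab-notebook directory is already a git repo; the function name, file name, and metadata fields are hypothetical:

```python
import json
import subprocess
from datetime import datetime, timezone

def record_step(repo_dir: str, step_name: str, params: dict) -> None:
    """Append one experimental step to the lab-notebook repo and commit it,
    so provenance is captured as it happens rather than reconstructed later."""
    entry = {
        "step": step_name,
        "params": params,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(f"{repo_dir}/experiment_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    subprocess.run(["git", "-C", repo_dir, "add", "experiment_log.jsonl"], check=True)
    subprocess.run(["git", "-C", repo_dir, "commit", "-m", f"step: {step_name}"], check=True)

# Hypothetical usage inside an automated protocol:
# record_step("/data/lab-notebook", "dose_compound", {"compound": "rapa-analog-7", "uM": 5})
```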

==

=> THIS https://twitter.com/CellRaiser_/status/1592909670607618048

https://twitter.com/MichaelTrazzi/status/1621680844917309441?t=AUt4esrWv3GcA9czmHamXw&s=19

The plausibility of an “AI singularity” is not particularly well evidenced - I’ll mention it seems plausible to me within centuries rather than within two decades, and more likely via “simulated evolution” (despite that being a hard, unsolved problem) than via the current wave of artificial/binary neural networks.

You should make a better case, with evidence and context, while avoiding the over-exuberance of “this wave is it”.

I’ll also mention I got a sneak insider peek at Google’s Bard, since your article is telling you to buy Google stock without quality evidence (which might be speculating in the right direction but for the wrong reasons). Not particularly impressed. The funny thing is some of the team members have devolved to using Bard to look up their own names (ie “Who is Alex K Chen?”) with hilarious yet “poor” results. Basically it’s better than ChatGPT at charmingly offering safe answers, with fewer “safety” loopholes. You should try to get insider access to the cutting edge if you haven’t. It’s easy to imagine without knowing what’s actually going on.

“Robotic advancements” are not the same as robots being able to do fine motor movements.
