[the chances of this are probably >50% in 20-30 years, though still with wide uncertainty]
Hinton already says “5-20 years” [and he would be surprised if not in that window]
https://x.com/8teAPi/status/1845687245560504703
Dario Amodei is now on record [and it could come in as little as 1.5 years]. Algorithmic improvements (see Epoch AI, Liquid AI) mean we can probably get there faster than what scaling alone predicts… (this is such a vibe shift from just 4 years ago, when people were still citing Ajeya Cotra’s bio-anchors essay, which had longer timelines based on evolutionary compute - though even those would still roughly fall within the healthspans of millennials, and even then I recognized that her estimates were conservative upper bounds that didn’t make use of algorithmic improvements)
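To make the “evolutionary compute” framing concrete, here’s a back-of-the-envelope sketch in the spirit of the bio-anchors evolution estimate - the round numbers below are mine, not Cotra’s exact figures:

```python
# Rough "evolution anchor" arithmetic (my round numbers, not Cotra's exact
# figures): total compute performed by evolution ~= aggregate neural compute
# running across all organisms x seconds of evolutionary history.

seconds_of_evolution = 1e9 * 3.15e7  # ~1 billion years, in seconds
aggregate_neural_flops = 1e25        # assumed FLOP/s across all nervous systems on Earth
evolution_anchor = aggregate_neural_flops * seconds_of_evolution

# ~3e41 FLOP: astronomically more than any training run to date, which is why
# this anchor implied much longer timelines than pure scaling extrapolations.
print(f"{evolution_anchor:.1e} FLOP")
```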
But most things in life are harder in practice than in theory, and many theoretically possible things have just been intractably difficult for some reason or another (fusion power, flying cars, nanotech, etc). If self-play/synthetic data doesn’t work, the models look fucked - you’re never gonna get anywhere near that platonic irreducible loss. Also, the theoretical reasons to expect scaling to keep working are murky, and the benchmarks on which scaling seems to lead to better performance have debatable generality.
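For concreteness, the “irreducible loss” is the E term in a Chinchilla-style parametric scaling law (Hoffmann et al. 2022); a minimal sketch using the paper’s fitted constants:

```python
# Parametric scaling law from the Chinchilla paper (Hoffmann et al. 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# E is the irreducible ("entropy of text") term that no amount of parameters
# N or training tokens D can push below. Constants are the paper's fits.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Even at absurd scale, loss only asymptotes toward E. If the remaining gap
# above E is where the capabilities live, and synthetic data can't supply
# more useful D, scaling alone stalls out.
for n, d in [(70e9, 1.4e12), (1e12, 20e12), (1e14, 2e15)]:
    print(f"N={n:.0e}, D={d:.0e} -> L={loss(n, d):.3f} (irreducible E={E})")
```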
So my tentative probabilities are: 70%: scaling + algorithmic progress + hardware advances will get us to AGI by 2040. 30%: the skeptic is right - LLMs and anything even roughly in that vein are fucked.
I’m probably missing some crucial evidence - the AI labs are simply not releasing that much research, since any insights about the “science of AI” would leak ideas relevant to building the AGI. A friend who is a researcher at one of these labs told me that he misses his undergrad habit of winding down with a bunch of papers - nowadays, nothing worth reading is published. For this reason, I assume that the things I don’t know would shorten my timelines.
===
I remember in the late 2010s, people still thought quantum computing would play a role in speeding up AI, but now it’s regarded as “not essential”. Still, see “The AI Boom is good for Quantum Tech” (Quantonation).
===
for what it’s worth, this makes me care much less about cryonics and even replacement organs (since the timescales of those are personally many decades away for me). Reprogramming is still relevant, as are aging-slowing drugs, but ALL the attention is already on them, so my attention is best spent elsewhere. It’s still worth spending extra money to slow aging (eg get the salad even if it’s more expensive, though I don’t know what the tradeoff is when it’s salad in plastics vs higher-calorie beans/almonds NOT in plastics). [i’ve noticed that *oranges* often sit on the Pareto-optimal frontier of ALL the tradeoffs, especially at airports and convention centers - see the toy sketch below]
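To spell out that “frontier” framing: an option is Pareto-optimal if nothing else beats it on every axis at once. A toy sketch with made-up scores (lower is better on each axis):

```python
# Toy Pareto-frontier illustration (my framing, hypothetical numbers).
# Axes: (cost, calories, plastic exposure) -- lower is better on all three.

foods = {
    "salad (plastic tub)": (9.0, 1.0, 3.0),
    "beans/almonds":       (4.0, 6.0, 1.0),
    "orange":              (1.5, 2.0, 0.0),
    "pastry":              (5.0, 9.0, 1.0),
}

def dominated(a, b):
    """True if option b is at least as good as a on every axis and strictly better on one."""
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

frontier = [name for name, score in foods.items()
            if not any(dominated(score, other) for other in foods.values() if other != score)]
print(frontier)  # ['salad (plastic tub)', 'orange'] -- the orange makes the cut
```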
I also stress much less over dying of old age - however, I still have to be super-careful about diet b/c it’s still socially and personally important for me to remain cute (esp b/c, due to something in my brain, I seem neurologically incapable of being anything other than “broad”/perpetually early-stage - this may turn out to be my unique comparative advantage, but it can only be sustained if I do my utmost to preserve my youthspan). That I’ve grown resistant to semaglutide is non-optimal, but maybe I should just do higher doses.
[on the frontier, it may be better to spend more time around neurotech and AI people - especially neurotech people if your mental health still isn’t the highest. This is why I’m actively talking to TMS/tFUS people… I may end up spending over $7500 for TMS at some point]
for some people (esp some fringe-alignment researchers), faster timelines tip the balance into taking more amphetamines (or even methamphetamine), though those don’t work on me as well as they used to, so I’m just drinking more tea/coffee
[since I don’t seem to tolerate semaglutide/amphetamines to the extent I used to, the real answer is the more “normie” solution of extreme quantities of coffee, particularly the cold-brewed kind]
I will add that there’s still much reason to be skeptical: “AGI=>longevity” people use A LOT of handwavy logic that doesn’t yet convince most longevity researchers of the feasibility of LEV in their lifetimes; many technologies in the biosciences are way harder than people think [ESPECIALLY if you want to minimize the risk of dying from the cumulative sum of all the interventions you do, including cell therapies!]; and the question now is whether the chances of “LEV” in our lifetimes are higher or lower than 50%, rather than very tiny. There is still strong reason to be cautious (even more cautious than before [cf Richard Ngo on taking fewer dumb risks as ASI approaches]) and to not do dumb things like overeat or be the first to try out potentially risky gene/cell therapies [though there are many who are more in need of those than you].
also, it may be worth spending more on what you need earlier rather than later (including 8TB hard drives, the fastest PCs, and the best augmented-reality technologies you can get), since right now the time value of money is higher than ever, and the amount of money you have now probably won’t matter that much in the far future IF we get superabundance. This also means always buying the salad or lentil soup even if it’s more expensive.
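A back-of-the-envelope sketch of that “spend now” logic, with made-up numbers (the 70% and the 15-year horizon are just placeholders echoing my probabilities above):

```python
# Toy model: if superabundance arrives at year T with probability p, money
# held past T is worth ~nothing to you, so the effective discount rate on
# deferred spending is much steeper than the usual market rate.

def value_of_dollar_saved(years: float, p_superabundance: float,
                          t_agi: float = 15.0, market_return: float = 0.05) -> float:
    """Expected usefulness of $1 saved for `years`, under the toy assumption
    that it becomes worthless if superabundance has arrived by then."""
    grown = (1 + market_return) ** years
    p_still_matters = 1 - p_superabundance if years >= t_agi else 1.0
    return grown * p_still_matters

# With a 70% chance of superabundance inside 15 years, $1 saved for 20 years
# is worth less in expectation than $1 spent on something useful today.
print(value_of_dollar_saved(20, p_superabundance=0.70))  # ~0.80
```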
Longevity, AI, and computational biology now attract so much attention and investment that most progress in them is now “baked in”, and it may be more worthwhile to spend your time elsewhere (this is what some refer to as “the plot”) [think of the potentially super-rare attributes you may have that AI could potentially amplify soon…]