AGI/ASI Timelines thread (AGI/ASI may solve longevity if it doesn't "kill us all" first)

https://medium.com/@rmaruy3/a-map-for-the-ai-in-for-science-discourse-57c352f527a6

Those images are mediocre, tbh; the frontier is at least a year ahead. Try Sora or Nano Banana Pro, whose outputs are nearly indistinguishable from real images.

You are probably right. As far as fine details go, Nano Banana Pro is exceptional. I tried the theme again: a scantily dressed woman in a lush garden with flowers and a water pool. Although, by Western standards, we can hardly say the woman is scantily dressed; she’s rather fully dressed in a tropical garb.

I think Gemini and Grok offer two distinct image generators: the former is extremely precise and able to insert text (a feature that has evidently been improved), while the latter provides multiple iterations of the same theme, with guided animation at the end that outputs a full, short video clip.

The bottom line, on which I think we can all agree, is that these AI tools are improving exponentially, just as the text-based tools are.

This suggests that AGI may be around the corner. That’s in the realm of imprecise probabilities, but it sounds more plausible by the day.

Now, just out of curiosity, since we’re discussing image generators: I gave the same prompt to Grok. The degree of realism does seem lower, but it arguably answered more correctly, prioritizing the first request in the list: ‘a woman, scantily dressed’. It also provided dozens of pictures of different women, among which the one I posted is one of the more dressed. Plus, I can get a clip of it. Different generators, different solutions, different requirements (like having text for scientific illustrations).

image


https://x.com/i/status/2001063756445733346

I half believe him… [only b/c AGI timelines are so aggressive now]

BJ does look a bit rejuvenated recently, or am I tripping?

I’ve been watching interviews with a guy in his early 30s, and when I saw this, my brain reminded me of him.

image


Low picture quality. His face still reminds me of a 40-something guy. It lacks the features typically associated with a younger person. The hairline also looks rather high and thin.

Also, Opus 4.5 really broke the METR curves; this is the closest evidence yet that the singularity is near.
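
For context on what "breaking the curve" means: METR tracks the length of tasks (at 50% success) that frontier models can complete, which has been growing roughly exponentially. A minimal sketch of that extrapolation, where the ~7-month doubling time and the 1-hour reference horizon are my assumptions for illustration, not figures from this thread:

```python
# Back-of-envelope METR-style task-horizon extrapolation.
# Assumptions (illustrative, not from the thread): a ~7-month doubling
# time and a ~1-hour 50%-success task horizon at the reference date.

def projected_horizon_hours(months_ahead: float,
                            horizon_now_hours: float = 1.0,
                            doubling_months: float = 7.0) -> float:
    """Task horizon after `months_ahead` months, under pure exponential growth."""
    return horizon_now_hours * 2 ** (months_ahead / doubling_months)

# A model "breaking the curve" lands above this projected trend line:
for months in (0, 7, 14, 28):
    print(months, "months ->", round(projected_horizon_hours(months), 1), "h")
```
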

https://x.com/rickwierenga/status/1976326164877885843

Last year, Dario was predicting powerful AI as early as 2026:

Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer.

AI companies might redirect the compute used for training the model to running millions of instances of it by ~2027:

  • The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.

  • Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

But Dario doesn’t believe in an instant transformation / Singularity:

First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.

Relevant thread: We Have AGI Now? It's Command-Line Agents like Opus 4.5


https://x.com/karpathy/status/2004607146781278521

[a change in mind from just a few months ago, when he was more bearish than many in Claude’s orbit…]

Sam Rodriques wrote something in the NYT: he doesn’t think AI will solve "human disease" "soon," but he also "hopes to be proven wrong."

Also, Kenny Workman and Hannah Le just posted an arXiv paper…

They updated their models

[estimate of AGI now delayed by like 4 years]

Adam Marblestone, interviewed by Dwarkesh Patel, says his research program would probably "get obsolete" if AGI moved really fast. Marblestone has ~10-year timelines [not far off from what Jacob Steinhardt said they would be in 2022].

Meh, given (a) Chinese capability progress, (b) some Chinese progress bleeding into American progress [even if simply by forcing the US to deregulate], and (c) all the surprising advancements this year, including Opus 4.5, I actually believe these updated timelines.

I don’t know how much to believe campen et al. 2025, but if AGI/ASI is ~8-12 years away, that’s still enough time for MNP levels in human tissue to rise to 1.5x their current values. My intuition is that we’ll still survive the MNP crisis, but only barely, only because of AI, and only because some high-profile disease or death will be attributed to them, forcing a mass switch away from disposable plastics in food packaging.
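
As a quick sanity check on that 1.5x figure: the implied annual compound growth rate over an 8-12 year window is modest. The window values below come from the post above; treating the growth as smooth and compounding is my simplifying assumption:

```python
# What annual compound growth rate in tissue MNP levels would produce a
# 1.5x total increase over 8, 10, or 12 years? (Smooth compounding is an
# illustrative assumption, not a claim from campen et al. 2025.)

def implied_annual_rate(total_factor: float, years: float) -> float:
    """Annual compound growth rate implied by `total_factor` over `years`."""
    return total_factor ** (1.0 / years) - 1.0

for years in (8, 10, 12):
    print(f"{years} years -> {implied_annual_rate(1.5, years):.1%}/yr")
```
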

==

I also can’t believe that just ~5 years ago, Ajeya Cotra’s timelines had a median of ~2050, not that far off from Ray Kurzweil’s original timelines. Even then, we saw her timelines as a sort of hard upper bound and expected timelines to only get more aggressive from there (which they did). I remember when people often cited that paper; now they’ve all moved on to more recent ones.


https://x.com/i/status/2006340921873297516

IMPORTANT


HT BENGIO

https://x.com/i/status/2007538247401124177

An achievement combining the work of people at Google DeepMind and some mathematicians:

https://x.com/A_G_I_Joe/status/2011213692617285729#m

New paper out today, proving a novel theorem in algebraic geometry with an internal math-specialized version of Gemini. This was a collaboration between @GoogleDeepMind (Professor Freddie Manners and @GSalafatinos, hosted by the Blueshift team) and Professors Jim Bryan, Balazs Elek, and Ravi Vakil.

Ravi Vakil, a world-class mathematician at Stanford, said:

As someone familiar with the literature, I found that Gemini’s argument was no mere repackaging of existing proofs; it was the kind of insight I would have been proud to produce myself. While I might have eventually reached this conclusion on my own, I cannot say so with certainty.

A Cambridge undergrad tweeted that he thought the Gemini models used might have just done some elementary "sum switching" without any hard algebraic geometry, but one of the paper’s authors corrected him, and he (the Cambridge undergrad) deleted the tweet:

https://xcancel.com/GSalafatinos/status/2011539686389416313#m

No, the model did algebraic geometry work too. The reason much of it isn’t included is because the authors could intuit most of the algebraic geometry just with the small case results and decided to write it by hand rather than precisely copy/verify every detail of the output

So, it seems models are now able to produce ideas that didn’t appear elsewhere in the literature: not mere repackaging of existing ideas, but genuinely novel ones. Indeed, the authors write in the paper:

It is natural to ask how close the resemblance is between the AI-contributed proofs, and prior literature that Gemini is likely to have seen in its training data [As run, none of the systems had access to the internet or other search tools.]. Certainly the latter includes related work such as [1, 3], and it seems likely that being able to build on these arguments made the problem more tractable for the AI systems than some other research problems. However, the model outputs (such as the one in Appendix C) do not appear to the authors to be that close to those or (to the best of our knowledge) any other pre-existing sources. So, absent some future discovery to the contrary, the model’s contribution appears to involve a genuine combination of synthesis, retrieval, generalization and innovation of these existing techniques.


image
https://x.com/davidad/status/2011845180484133071#m

image

Yeah, that sums up my experience well: incredible breakthroughs on one side (for example, the AlphaFold project), mediocre results on the other (for example, using the new knowledge of protein folding to produce new pharmaceutical drugs).

Also, in everyday life, I find the results inconsistent: some days the models totally surprise me, other days they disappoint me. But this may be due to congestion and rerouting.


davidad changing his mind is huge to me. He has seen a lot - one of the kindest, smartest, most neurodivergent-friendly, most well-exposed people ever.

He reminds me of Chris Olah in some ways (lol lol lol) but I feel like he’s way more wholesome.


All GLP-1s are dose-dependent in how they function :slight_smile: You are not "resistant".

While they appear to be magic, they are basic bioscience :slight_smile:

GLP1 effects on weight are dose dependent.pdf (49.1 KB)
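
The dose-dependence point can be sketched with a standard Emax (Hill) dose-response model, a common way pharmacologists describe this kind of relationship. All parameter values below are made up for illustration and are not taken from the attached PDF:

```python
# Illustrative sketch of why dose matters more than "resistance": a
# standard Emax (Hill) dose-response model. Emax, ED50, and the Hill
# coefficient here are invented for illustration only.

def emax_response(dose_mg: float, emax: float = 20.0,
                  ed50_mg: float = 1.0, hill: float = 1.0) -> float:
    """Predicted % body-weight loss at a given dose under an Emax model."""
    return emax * dose_mg ** hill / (ed50_mg ** hill + dose_mg ** hill)

# An apparent "non-responder" at a low dose may simply sit low on the curve:
for dose in (0.25, 1.0, 2.4):
    print(f"{dose} mg -> {emax_response(dose):.1f}% predicted loss")
```
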
