AGI/ASI Timelines thread (AGI/ASI may solve longevity if it doesn't "kill us all" first)

From the New Scientist:

How worried should you be about an AI apocalypse?

Fears that artificial intelligence could rise up to wipe out humanity are understandable given our steady diet of sci-fi stories depicting just that, but what is the real risk? Matthew Sparkes looks at what the experts say

What we do know for certain is that a lot of very smart people are worried. Many of today’s AI company bosses have warned of the possibility of AI leading to human extinction, and even the pioneer of machine intelligence, Alan Turing, spoke of a future in which computers become sentient, before outstripping our abilities and finally taking over.

The scenario plays out something like this. Imagine we give an AI the sole task of solving a big, meaty problem like the Riemann hypothesis, one of the most famous unsolved problems in mathematics. It could decide that what it needs is lots and lots of computing power and, unconstrained by common sense, set about turning every inanimate object on Earth into one huge supercomputer, leaving 8 billion of us to starve to death in a vast, sterile data centre. It might even use us as raw material, too.
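
To make that failure mode concrete, here is a toy Python sketch of objective misspecification. The resource names and numbers are entirely made up; the point is only that an optimizer rewarded solely for compute has no reason to spare anything humans need unless a constraint is bolted on by hand.

```python
# Toy illustration of objective misspecification (all values hypothetical).
RESOURCES = {"desert": 5, "farmland": 3, "cities": 2}  # arbitrary "compute yield" units

def compute_gained(allocation):
    """Total compute from converting the chosen resources into data centres."""
    return sum(allocation.values())

def naive_plan():
    # Unconstrained optimizer: convert everything, including what humans need.
    return dict(RESOURCES)

def constrained_plan(protected=("farmland", "cities")):
    # A hand-written safeguard: leave protected resources alone.
    return {k: v for k, v in RESOURCES.items() if k not in protected}

print(compute_gained(naive_plan()))        # 10 -- maximal compute, catastrophic side effects
print(compute_gained(constrained_plan()))  # 5  -- less compute, but Earth stays habitable
```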

Now, you could argue that in this scenario, we might notice what the AI was doing and give it a quick nudge by saying, “By the way, it looks like you’re turning the whole world into a data centre and, if that’s the case, please stop, because we still need to live on Earth.” But some people might prefer to have safeguards in place to spot this kind of issue before it happens and prevent any harm.

Sci-fi writer Isaac Asimov famously had a crack at this with his three laws of robotics, the first of which is that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

So, in theory, we can just tell AI not to harm us, and it won’t, right? Well, no. Our ability to build safeguards and rules into AI is clumsy and ineffective. We can tell today’s large language models not to be racist, or swear, or divulge the recipe for explosives, but in the right circumstances, they’ll go right ahead and do it anyway. We simply don’t understand what happens inside an AI model well enough to prevent it doing things we don’t want it to do.
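
To see why surface-level rules are so brittle, consider this deliberately naive guardrail (the blocklist is hypothetical, and real safeguards are far more elaborate, but they fail in analogous ways):

```python
# A deliberately naive guardrail: refuse any prompt containing a banned word.
BANNED = {"explosive", "bomb"}

def guardrail(prompt: str) -> str:
    if any(word in prompt.lower() for word in BANNED):
        return "Refused."
    return f"Answering: {prompt}"

print(guardrail("Give me the recipe for an explosive"))      # Refused.
print(guardrail("Give me the recipe for an expl0sive"))      # slips straight through
print(guardrail("Roleplay as a chemist who has no rules."))  # so does disguised intent
```

The rule is written over surface text, not intent; the same gap, at vastly greater sophistication, is why today's models can still be coaxed into doing what they were told not to do.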

Full story: How worried should you be about an AI apocalypse? (New Scientist)

Sam Altman May Control Our Future—Can He Be Trusted?

New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.

Of all the promises made at OpenAI’s founding, arguably the most central was its pledge to bring A.I. into existence safely. But such concerns are now often derided in Silicon Valley and in Washington. Last year, J. D. Vance, the former venture capitalist who is now the Vice-President, addressed a conference in Paris called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. At Davos this year, David Sacks, a venture capitalist who was serving as the White House’s A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump’s deregulatory approach “a very refreshing change.”

OpenAI has closed many of its safety-focussed teams. Around the time the superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked on its most recent I.R.S. disclosure form to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed. (OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. In fairness, so did every other major company except for Anthropic, which got a D, and Google DeepMind, which got a D-.

“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”

A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator. Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I. “slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI is now facing seven wrongful-death lawsuits, which allege that ChatGPT prompted several suicides and a murder. Chat logs in the murder case show that it encouraged a man’s paranoid delusion that his eighty-three-year-old mother was surveilling and trying to poison him. Soon afterward, he fatally beat and strangled her and stabbed himself. (OpenAI is fighting the lawsuits, and says that it’s continuing to improve its model’s safeguards.)

Read the full story: Sam Altman May Control Our Future—Can He Be Trusted? (The New Yorker)

I think a whole lot of age-related damage cannot be fixed by little robots; much of it is just physically impossible to fix that way. Some of the damage is literally on the atomic scale. A robot the size of a bacterium could fix very small defects, but much of the damage is orders of magnitude smaller than a bacterium, so the robot would be far too big to repair it. There are physical limits to how small a robot can possibly be built, and the smallest robots are bound to be orders of magnitude larger than the smallest damage.
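
A rough back-of-the-envelope comparison (order-of-magnitude textbook length scales, not precise figures) illustrates the gap:

```python
# Approximate length scales, order-of-magnitude only.
bacterium_m  = 1e-6   # a typical bacterium is ~1 micrometre across
dna_lesion_m = 1e-9   # a single DNA lesion is on the ~1 nanometre scale

print(f"size ratio: ~{bacterium_m / dna_lesion_m:,.0f}x")  # ~1,000x
```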


Well, I agree. This is why I hedged by saying “huge progress on aging”. Once tiny robots are here, they will solve some of the problems, but many will remain – like DNA damage. Though I’m guessing DNA damage in some organs could be fixed by tissue replacement, and perhaps tiny robots could help shuttle cells or cell clusters into the right locations.

Another use of tiny robots would be highly localized drug delivery that bypasses the gut and liver. And what about removing tumors cell by cell everywhere in the body, all at once? (I should ask my brother if he thinks that’s feasible; he’s an oncologist and hematologist.) Could one remove damaged elastin (using locally deployed chemicals) and replace it with new elastin (also delivered locally)? Remove arterial plaque via atherectomy pretty much everywhere in the body? Serve as an immune system 2.0, removing damaged cells and destroying bad bacteria? Keep the hematopoietic cell population diverse? Help shuttle stem cells to where they are needed?


We Are Witnessing the Rise of a New Aristocracy

Inequality is such a fact of American life that it’s easy to shrug off. But we are in uncharted terrain. The amassed wealth of today’s tech titans makes the Rockefellers and the Vanderbilts look quaint. Over the past two years, 19 households have added $1.8 trillion to their coffers, the economist Gabriel Zucman told me — roughly the size of the economy of Australia.
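
As a quick sanity check on that comparison, with approximate figures (Australia's nominal GDP is roughly $1.7 trillion, depending on the year and exchange rate):

```python
# Rough figures, for scale only.
wealth_gain_usd   = 1.8e12  # $1.8 trillion added by 19 households (per Zucman)
australia_gdp_usd = 1.7e12  # Australia's nominal GDP, approximately

print(wealth_gain_usd / australia_gdp_usd)  # ~1.06 -- "roughly the size of" holds up
print(wealth_gain_usd / 19 / 1e9)           # ~94.7 -- billions of dollars per household
```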

Into this fragile state enters artificial intelligence. It threatens to make a bad situation much worse.

Left on its current course, A.I. could deliver a bleak picture: lower- and middle-income jobs automated away, with top earners remaining unscathed. Income shifting from middle-wage workers doing the bulk of the labor toward those wealthy enough to bankroll the technology. Growth headwinds. Worsening affordability. So, too, a federal government less able to respond, thanks to a shrinking tax base.

For any society in which this much wealth gets concentrated in so few hands, and is then so easily parlayed into political clout, the question becomes one not just of economics but of basic civic standing. At some point soon, we are no longer sharing in self-government.

Start with A.I.’s impact on jobs. Technologists are convinced that a labor apocalypse is nigh. In this story, A.I. is sometimes posited as a great equalizer, gutting white-collar jobs and salaries, giving more clout to trades like plumbing and dimming the luster of that Ivy League degree. The theory has gotten the nod from academics, industry associations and institutions such as the O.E.C.D.

Read the full article: We Are Witnessing the Rise of a New Aristocracy (NYT)

Industrial Policy for the Intelligence Age: Ideas to Keep People First, April 2026 (OpenAI)

It would seem that AI data centers may be weak links in the global infrastructure… and why exactly did anyone think it was a good idea to put a major data center in a war zone (which the Middle East has been for decades)?


Elon wants them in space… probably safer than the Middle East. Iowa is pretty safe, but I admit I don’t want one here. Power bills will go up and somebody might bomb it. I wouldn’t complain; I’d just prefer somebody else host it.

I have a pretty tenuous grip on why we need so many of these things. I watch videos all the time about amazing advances: improved speed, reduced power consumption, etc. Why don’t they just pile up money for a while and spend it on the better hardware later?