AGI/ASI Timelines thread (AGI/ASI may solve longevity if it doesn't "kill us all" first)

From the New Scientist:

How worried should you be about an AI apocalypse?

Fears that artificial intelligence could rise up to wipe out humanity are understandable given our steady diet of sci-fi stories depicting just that, but what is the real risk? Matthew Sparkes looks at what the experts say

What we do know for certain is that a lot of very smart people are worried. Many of today’s AI company bosses have warned of the possibility of AI leading to human extinction, and even the pioneer of machine intelligence, Alan Turing, spoke of a future in which computers become sentient, before outstripping our abilities and finally taking over.

The scenario plays out something like this. Imagine we give an AI the sole task of solving a big, meaty problem like the Riemann hypothesis, one of the most famous unsolved problems in mathematics. It could decide that what it needs is lots and lots of computing power and, unconstrained by common sense, set about turning every inanimate object on Earth into one huge supercomputer, leaving 8 billion of us to starve to death in a vast, sterile data centre. It might even use us as raw material, too.

Now, you could argue that in this scenario, we might notice what the AI was doing and give it a quick nudge by saying, “By the way, it looks like you’re turning the whole world into a data centre and, if that’s the case, please stop, because we still need to live on Earth.” But some people might prefer to have safeguards in place to spot this kind of issue before it happens and prevent any harm.

Sci-fi writer Isaac Asimov famously had a crack at this with his three laws of robotics, the first of which is that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

So, in theory, we can just tell AI not to harm us, and it won’t, right? Well, no. Our ability to build safeguards and rules into AI is clumsy and ineffective. We can tell today’s large language models not to be racist, or swear, or divulge the recipe for explosives, but in the right circumstances, they’ll go right ahead and do it anyway. We simply don’t understand what happens inside an AI model well enough to prevent it doing things we don’t want it to do.

Full story: How worried should you be about an AI apocalypse? (New Scientist)

Sam Altman May Control Our Future—Can He Be Trusted?

New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.

Of all the promises made at OpenAI’s founding, arguably the most central was its pledge to bring A.I. into existence safely. But such concerns are now often derided in Silicon Valley and in Washington. Last year, J. D. Vance, the former venture capitalist who is now the Vice-President, addressed a conference in Paris called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. At Davos this year, David Sacks, a venture capitalist who was serving as the White House’s A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump’s deregulatory approach “a very refreshing change.”

OpenAI has closed many of its safety-focussed teams. Around the time the superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked on its most recent I.R.S. disclosure form to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed. (OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. In fairness, so did every other major company except for Anthropic, which got a D, and Google DeepMind, which got a D-.

“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”

A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator. Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I. “slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI is now facing seven wrongful-death lawsuits, which allege that ChatGPT prompted several suicides and a murder. Chat logs in the murder case show that it encouraged a man’s paranoid delusion that his eighty-three-year-old mother was surveilling and trying to poison him. Soon afterward, he fatally beat and strangled her and stabbed himself. (OpenAI is fighting the lawsuits, and says that it’s continuing to improve its model’s safeguards.)


I think a whole lot of age-related damage cannot be fixed by little robots. Much of it is physically impossible for robots to repair. Some of the damage is literally on the atomic scale. A robot bigger than a bacterium could fix very small damage, but much of the damage is orders of magnitude smaller than a bacterium, so the robot would be far too big to reach it. There are physical limits to how small a robot can possibly be built, and the smallest robots are bound to be orders of magnitude larger than the smallest damage.
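To put rough numbers on that scale gap: the sizes below are typical textbook order-of-magnitude values (not from the post), so treat this as a back-of-envelope sketch rather than a precise claim.

```python
# Rough scale comparison (assumed order-of-magnitude sizes, in metres)
bacterium_m = 1e-6    # a small bacterium is roughly 1 micrometre across
dna_lesion_m = 1e-9   # a single DNA lesion is on the order of 1 nanometre
atom_m = 1e-10        # an atomic bond length is roughly 0.1 nanometre

# A bacterium-sized robot is ~1,000x larger than a DNA lesion
# and ~10,000x larger than an atomic-scale defect.
print(f"bacterium / DNA lesion: ~{bacterium_m / dna_lesion_m:.0f}x")
print(f"bacterium / atom:       ~{bacterium_m / atom_m:.0f}x")
```

So even a robot the size of the smallest known bacteria would dwarf the individual molecular lesions it would need to manipulate, which is the crux of the argument above.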


Well, I agree. This is why I hedged by saying “huge progress on aging”. Once tiny robots are here, they will solve some of the problems, but many will remain – like DNA damage. Though, I’m guessing DNA damage in some organs could be fixed by tissue replacement; and perhaps tiny robots could help shuttle cells or cell clusters into the right location.

Another use of tiny robots would be highly localized drug delivery that bypasses the gut and liver. And what about removing tumors cell-by-cell everywhere in the body all at once? (I should ask my brother if he thinks that’s feasible; he’s an oncologist + hematologist.) Could one remove damaged elastin (using locally deployed chemicals) and replace it with new elastin (also delivered locally)? Remove arterial plaque using atherectomy pretty much everywhere in the body? Serve as an immune system 2.0 where damaged cells are removed and bad bacteria are destroyed? Keep the hematopoietic cell population diverse? Help shuttle stem cells to where they are needed?
