From the New Scientist:
How worried should you be about an AI apocalypse?
Fears that artificial intelligence could rise up to wipe out humanity are understandable given our steady diet of sci-fi stories depicting just that, but what is the real risk? Matthew Sparkes looks at what the experts say
What we do know for certain is that a lot of very smart people are worried. Many of today’s AI company bosses have warned of the possibility of AI leading to human extinction, and even Alan Turing, the pioneer of machine intelligence, spoke of a future in which computers become sentient, outstrip our abilities and finally take over.
The scenario plays out something like this. Imagine we give an AI the sole task of solving a big, meaty problem like the Riemann hypothesis, one of the most famous unsolved problems in mathematics. It could decide that what it needs is lots and lots of computing power and, unconstrained by common sense, set about turning every inanimate object on Earth into one huge supercomputer, leaving 8 billion of us to starve to death in a vast, sterile data centre. It might even use us as raw material.
Now, you could argue that in this scenario, we might notice what the AI was doing and give it a quick nudge by saying, “By the way, it looks like you’re turning the whole world into a data centre and, if that’s the case, please stop, because we still need to live on Earth.” But some people would prefer safeguards that can spot this kind of issue before it happens and prevent any harm.
Sci-fi writer Isaac Asimov famously had a crack at this with his Three Laws of Robotics, the first of which is that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
So, in theory, we can just tell AI not to harm us, and it won’t, right? Well, no. Our ability to build safeguards and rules into AI is clumsy and ineffective. We can tell today’s large language models not to be racist, or swear, or divulge the recipe for explosives, but in the right circumstances, they’ll go right ahead and do it anyway. We simply don’t understand what happens inside an AI model well enough to prevent it doing things we don’t want it to do.
Full story: How worried should you be about an AI apocalypse? (New Scientist)