In 2024, Scottish futurist David Wood was part of an informal roundtable discussion at an artificial intelligence (AI) conference in Panama when the conversation veered toward how we can avoid the most disastrous AI futures. His sarcastic answer was far from reassuring.

First, we would need to amass the entire body of AI research ever published, from Alan Turing’s seminal 1950 paper to the latest preprint studies. Then, he continued, we would need to burn this entire body of work to the ground. To be extra careful, we would need to round up every living AI scientist — and shoot them dead. Only then, Wood said, could we guarantee that we sidestep the “non-zero chance” of disastrous outcomes ushered in by the technological singularity — the “event horizon” moment when AI develops general intelligence that surpasses human intelligence.

Wood, who is himself a researcher in the field, was obviously joking about this “solution” to mitigating the risks of artificial general intelligence (AGI). But buried in his sardonic response was a kernel of truth: The risks a superintelligent AI poses are terrifying to many people because they seem unavoidable. Most scientists predict that AGI will be achieved by 2040 — but some believe it may happen as soon as next year.


So what happens if we assume, as many scientists do, that we have boarded a nonstop train barreling toward an existential crisis?


