Thinking about the end of the world can be fun. Although realistic doomsday scenarios—nuclear war, global warming, autocracy—are stressful to contemplate, more fanciful apocalypses (an alien invasion, a robot uprising) can generate some enjoyable escapism.
This is probably one of the reasons that, when generative artificial intelligence burst into public consciousness three years ago with the launch of ChatGPT, so many responses focused on the “existential risk” posed by hypothetical future AI systems rather than on the more immediate, well-founded dangers of thoughtlessly deployed technology.
But long before the AI bubble arrived, some people were banging on about the possibility of it killing us all.
Chief among them was Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI). For more than 20 years, Yudkowsky has been warning about the dangers posed by “superintelligent” AI, machines able to out-think and out-plan all of humanity.
The title of Yudkowsky’s new book on the subject, co-written with Nate Soares, the president of MIRI, sets the tone from the start. If Anyone Builds It, Everyone Dies is their attempt to make a succinct case for AI doom.
It is also tendentious and rambling, simultaneously condescending and shallow.