The first time I met Eliezer Yudkowsky, he said there was a 99.5 percent chance that A.I. was going to kill me.
I didn’t take it personally. Mr. Yudkowsky, 46, is the founder of the Machine Intelligence Research Institute, a Berkeley-based nonprofit that studies risks from advanced artificial intelligence.
For the last two decades, he has been Silicon Valley’s version of a doomsday preacher — telling anyone who will listen that building powerful A.I. systems is a terrible idea, one that will end in disaster.
That is also the message of Mr. Yudkowsky’s new book, “If Anyone Builds It, Everyone Dies.” The book, co-written with MIRI’s president, Nate Soares, is a distilled, mass-market version of the case they have been making to A.I. insiders for years.
Their goal is to stop the development of A.I. — and the stakes, they say, are existential.
“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die,” they write.
Read more at The New York Times.