“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us,” Yoshua Bengio told the Wall Street Journal.
Because they are trained on human language and behavior, these advanced models could potentially persuade or even manipulate humans to achieve their goals. Yet AI models' goals may not always align with human goals, Bengio said.
“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” he claimed.
Read more | FORTUNE