Dr. Martin Peterson, a philosophy professor at Texas A&M University, says that while AI can mimic human decision-making, it cannot truly make moral choices.
AI cannot, by itself, be a “moral agent” that understands the difference between right and wrong and can be held accountable for its actions, he said.
“AI can produce the same decisions and recommendations that humans would produce,” he said, “but the causal history of those decisions differs in important ways.” Unlike humans, AI lacks free will and cannot be held morally responsible. If an AI system causes harm, the blame lies with its developers or users, not the technology itself.
Read more | PHYS.org

