Dr. Randazzo described this lack of transparency as the “black box problem,” noting that humans cannot trace how deep-learning and machine-learning systems arrive at their decisions. This opacity makes it difficult for individuals to understand whether and how an AI model has infringed on their rights or dignity, and prevents them from effectively pursuing justice when such violations occur.
“This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo said.
“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.
“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”