In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral consideration, including protections such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year.
Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate “persistently harmful or abusive user interactions” that could be “potentially distressing.”
“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” Anthropic said in a blog post. “However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare.”
While worrying about the well-being of artificial intelligence may seem ridiculous to some people, it’s not a new idea.
More than half a century ago, American mathematician and philosopher Hilary Putnam was posing questions like, “Should robots have civil rights?”