10 Questions About Conscious Machines
How will we handle the rights of AI?
By Max Borders
For thousands of years, people have watched the skies and wondered if our species is alone in the universe — and what it would mean if we’re not. But lately, we have reason to think that we will build the answer long before we get a call from the stars. What happens to our institutions when the machines wake up? Max Borders raises ten questions we’ll need to answer. -Ed.
In the past year or so, there have been a lot of films about artificial intelligence: *Her*, *Chappie*, and now there's *Ex Machina*.
These films are good for us.
They stretch our thinking. They prompt us to ask serious questions about the possibility and prospects of conscious machines, questions we may need to answer if we must someday coexist with newly sentient beings. Some of these questions may sound far out, but they force us to think critically about important first principles.
Ten come to mind.
- Can conscious awareness arise from causal-physical stuff, such as material assembled (or grown) in a laboratory, to make a sentient being?
- If such beings become conscious and aware and have volition, does that mean they could also experience pain, pleasure, and emotion?
- If these beings have human-like emotions, as well as volition, does that mean they are owed humane and ethical treatment?
- If these beings ought to be treated humanely and ethically, does that also confer certain rights upon them — and are they equal to the rights that humans have come to expect from each other? Does the comparison even make sense?
- If these beings have rights, is it wrong to program them for the specific task of serving us? What if they derive pleasure from serving us, or are programmed to do so?
- If these beings have rights by virtue of their consciousness and volition, does that offer a philosophical basis for rights in general?
- If these beings do not have rights that people must respect, could anything at all grant rights to them?
- If these beings have nothing that grants them ethical treatment or rights, what makes humans distinct in this respect?
- If we were able to combine human intelligence with AI — a hybrid, if you will, in which the brain was a mix of biological material and sophisticated circuitry — what would be the ethical/legal status of this being?
- If it turns out that humans are not distinct in any meaningful sense from robots, at least in terms of justifying rights, does that mean that rights are a social construct?
These questions might make some people uncomfortable. They should. I merely raise them; I do not purport to answer them here.
I invite your invective and your reasoned, tentative answers in the comments.
This article was originally posted at Anything Peaceful.