
Should We Give AIs the Same Rights as Humans?

Evolution of AI

Artificial Intelligence (AI) is developing at a staggering pace. AIs have the potential to exceed human intelligence within the next 45 years, have recently been shown to be adept at reading and reacting to emotions, and can defeat five of the world’s best human players at one of our most highbrow games.

In short, AI is becoming more and more human-like, and will likely soon be indistinguishable from us. Several experts have voiced opinions about what may, or may not, constitute the differences between humanity and its intelligent creations.

James Hughes, the executive director of the Institute for Ethics and Emerging Technologies, told Gizmodo that he believes the key characteristic is self-awareness, which he defines as awareness of one’s own body and environment. As soon as a being achieves this, it becomes a person.

Hughes also makes an interesting link to histories of oppression, suggesting that discriminating against machines may be similar to persecuting people for their gender or skin color. “Our Enlightenment values oblige us to look to these truly important rights-bearing characteristics, regardless of species, and set aside pre-Enlightenment restrictions on rights-bearing to only humans or Europeans or men,” Hughes said in the Gizmodo interview.

Types of AI: From Reactive to Self-Aware [INFOGRAPHIC]

Wesley Smith, a senior fellow at the Discovery Institute’s Center on Human Exceptionalism, holds the opposite view — that machines should never receive personhood precisely because they are machines. In philosophy, this is an appeal to intrinsic value (based on what a thing is in and of itself), in contrast to Hughes’ extrinsic value (based on a thing’s properties).

“Even the most sophisticated machine is just a machine,” Smith said in a Gizmodo interview. “It is not a living being. It is not an organism. It would be only the sum of its programming, whether done by a human, another computer, or if it becomes self-programming.”

Others have proposed developing a way to “measure” consciousness. One example is Giulio Tononi’s Integrated Information Theory of Consciousness, which aims to ascertain whether a particular system “has consciousness, how much, and of which kind,” while providing explanations for empirical evidence, testable predictions, and grounds for inference. Although the model still has significant problems, it at least offers a common denominator for comparison.

The Consequences of a Definition

How to treat beings that are made of different materials but are as sentient as we are is a key question, as the answer will determine how AI is integrated into our society. To avoid committing a moral atrocity, we must begin thinking about these questions now.

But just because we must do something does not mean it is easy to do. Ed Boyden, a neuroscientist at the Synthetic Neurobiology Group and an associate professor at MIT Media Lab, said in a Gizmodo interview, “I don’t think we have an operational definition of consciousness, in the sense that we can directly measure it or create it.”

Boyden alludes to the fact that we haven’t even arrived at a suitable definition of consciousness for ourselves. The question of personhood has stumped philosophers (including Simon Blackburn, David Hume, and Mary Anne Warren, among others) for millennia, because any definition quickly leads us into ethical black holes.

AI’s personhood status has not only societal ramifications but also political, moral, and philosophical ones. Our decision could lead to a Matrix-esque nightmare in which we are ruled by AI, or a Her-like state of affairs, in which humans and machines live co-dependently and peacefully. The definition is also vital because it will determine how robots treat us.

In order to give robots a command about how to treat “humans,” a definition of “human” must be coded into their programming. People often fall back on Asimov’s Three Laws of Robotics, but his fiction was built around the problems robots had with those very laws, in particular with what constitutes “human” and “harm.”

While this is an extremely difficult field, it is encouraging that experts are thinking about these questions, because as AI technology continues its rapid advancement, we will need answers sooner rather than later.

The post Should We Give AIs the Same Rights as Humans? appeared first on Futurism.

