We Need Conscious Robots

Photo: Armstrong Roberts / ClassicStock / Getty Images

How introspection and imagination make robots better.

People often ask me whether human-level artificial intelligence will eventually become conscious. My response is: Do you want it to be conscious? I think it is largely up to us whether our machines will wake up.

That may sound presumptuous. The mechanisms of consciousness—the reasons we have a vivid and direct experience of the world and of the self—are an unsolved mystery in neuroscience, and some people think they always will be; it seems impossible to explain subjective experience using the objective methods of science. But in the 25 or so years that we’ve taken consciousness seriously as a target of scientific scrutiny, we have made significant progress. We have discovered neural activity that correlates with consciousness, and we have a better idea of what behavioral tasks require conscious awareness. Our brains perform many high-level cognitive tasks subconsciously.

Consciousness, we can tentatively conclude, is not a necessary byproduct of our cognition. The same is presumably true of AIs. In many science-fiction stories, machines develop an inner mental life automatically, simply by virtue of their sophistication, but it is likelier that consciousness will have to be expressly designed into them.

And we have solid scientific and engineering reasons to try to do that. Our very ignorance about consciousness is one. The engineers of the 18th and 19th centuries did not wait until physicists had sorted out the laws of thermodynamics before they built steam engines. It worked the other way round: Inventions drove theory. So it is today. Debates on consciousness are often too philosophical and spin around in circles without producing tangible results. The small community of us who work on artificial consciousness aims to learn by doing.

Furthermore, consciousness must have some important function for us, or else evolution wouldn’t have endowed us with it. The same function would be of use to AIs. Here, too, science fiction might have misled us. For the AIs in books and TV shows, consciousness is a curse. They exhibit unpredictable, intentional behaviors, and things don’t turn out well for the humans. But in the real world, dystopian scenarios seem unlikely. Whatever risks AIs may pose do not depend on their being conscious. To the contrary, conscious machines could help us manage the impact of AI technology. I would much rather share the world with them than with thoughtless automatons.

When AlphaGo was playing against the human Go champion, Lee Sedol, many experts wondered why AlphaGo played the way it did. They wanted some explanation, some understanding of AlphaGo’s motives and rationales. Such situations are common for modern AIs, because their decisions are not preprogrammed by humans, but are emergent properties of the learning algorithms and the data sets they are trained on. Their inscrutability has created concerns about unfair and arbitrary decisions. Already there have been cases of discrimination by algorithms; for instance, a ProPublica investigation last year found that an algorithm used by judges and parole officers in Florida flagged black defendants as more prone to recidivism than they actually were, and white defendants as less prone than they actually were.

Beginning next year, the European Union will give its residents a legal “right to explanation.” People will be able to demand an accounting of why an AI system made the decision it did. This new requirement is technologically demanding. At the moment, given the complexity of contemporary neural networks, we have trouble discerning how AIs produce decisions, much less translating the process into a language humans can make sense of.


If we can’t figure out why AIs do what they do, why don’t we ask them? We can endow them with metacognition—an introspective ability to report their internal mental states. Such an ability is one of the main functions of consciousness. It is what neuroscientists look for when they test whether humans or animals have conscious awareness. For instance, confidence, a basic form of metacognition, scales with the clarity of conscious experience. When our brain processes information without our noticing, we feel uncertain about that information, whereas when we are conscious of a stimulus, the experience is accompanied by high confidence: “I definitely saw red!”…
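To make the idea concrete, here is a minimal sketch, in Python, of machine metacognition as self-reported confidence. It assumes a hypothetical classifier that exposes a probability distribution over its possible decisions; the class name MetacognitiveAgent, the report method, and the 0.9 threshold are illustrative inventions, not anything described in the article.

```python
# A sketch of metacognition as self-reported confidence.
# "predict_proba" stands in for any trained model that returns
# a probability distribution over possible decisions.
import numpy as np

class MetacognitiveAgent:
    def __init__(self, predict_proba):
        self.predict_proba = predict_proba  # input -> probability vector

    def report(self, x):
        probs = self.predict_proba(x)
        decision = int(np.argmax(probs))    # first-order decision
        confidence = float(np.max(probs))   # second-order self-report
        # Normalized entropy: 0 = fully certain, 1 = maximally unsure.
        uncertainty = float(-(probs * np.log(probs + 1e-12)).sum()
                            / np.log(len(probs)))
        verbal = ("I definitely saw it!" if confidence > 0.9
                  else "I'm not sure what I saw.")
        return {"decision": decision, "confidence": confidence,
                "uncertainty": uncertainty, "report": verbal}

# Toy stand-in for a trained network: always 95% sure of class 0.
agent = MetacognitiveAgent(lambda x: np.array([0.95, 0.03, 0.02]))
print(agent.report("red stimulus"))
```

Nothing here makes the system conscious, of course; the point is only that a decision can be packaged with a graded, reportable estimate of its own reliability—the kind of introspective signal described above.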

Read the full article: http://nautil.us/issue/47/consciousness/we-need-conscious-robots
