Raising good robots


Gael Rougegrez of the Blanca Li Dance Company performs ‘Robot’, 22 February 2017 in London, England. Photo by Ian Gavan/Getty

We already have a way to teach morals to alien intelligences: it’s called parenting. Can we apply the same methods to robots?

Regina Rini is an assistant professor and faculty fellow at the New York University Center for Bioethics, and an affiliate faculty member in the Medical Ethics division of the NYU Department of Population Health.

 

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

In 2016, a computer program challenged Lee Sedol, humanity’s leading player of the ancient game of Go. The program, a Google project called AlphaGo, is an early example of what AI might be like. In the second game of the match, AlphaGo made a move – ‘Move 37’ – that stunned expert commentators. Some thought it was a mistake. Lee, the human opponent, stood up from the table and left the room. No one quite knew what AlphaGo was doing; this was a tactic that expert human players simply did not use. But it worked. AlphaGo won that game, as it had the first, and it went on to take the next as well. In the end, Lee won only a single game out of five.

AlphaGo is very, very good at Go, but it is not good in the same way that humans are. Not even its creators can explain how it settles on its strategy in each game. Imagine that you could talk to AlphaGo and ask why it made Move 37. Would it be able to explain the choice to you – or to human Go experts? Perhaps. Artificial minds needn’t work as ours do to accomplish similar tasks.

In fact, we might discover that intelligent machines think about everything, not just Go, in ways that are alien to us. You don’t have to imagine some horrible science-fiction scenario, where robots go on a murderous rampage. It might be something more like this: imagine that robots show moral concern for humans, and robots, and most animals… and also sofas. They are very careful not to damage sofas, just as we’re careful not to damage babies. We might ask the machines: why are you so worried about sofas? And their explanation might not make sense to us, just as AlphaGo’s explanation of Move 37 might not make sense.

This line of thinking takes us to the heart of a very old philosophical puzzle about the nature of morality. Is it something above and beyond human experience, something that applies to anyone or anything that could make choices – or is morality a distinctly human creation, something specially adapted to our particular existence?

Long before robots, the ancient Greeks had to grapple with the morality of a different kind of alien mind: the teenager. The Greeks worried endlessly about how to cultivate morality in their youth. Plato thought that our human concept of justice, like all human concepts, was a pale reflection of some perfect form of Justice. He believed that we have an innate acquaintance with these forms, but that we understand them only dimly as children. Perhaps we will encounter pure Justice after death, but the task of philosophy is to try to reason our way back to these truths while we are still living…

more…

https://aeon.co/essays/creating-robots-capable-of-moral-reasoning-is-like-parenting

WIKK WEB GURU

Has humanity already lost control of artificial intelligence? Scientists admit that computers are learning too quickly for humans to keep up

Scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether. Pictured is the Terminator film, in which robots take over - a prospect that could soon become a reality

  • Last year, scientists made a driverless car that learned by watching humans
  • But even the creators of the car did not understand how it learned this way
  • In another study, a computer could pinpoint people with schizophrenia
  • Again, its creators were unsure how it was able to do this 

From driving cars to beating chess masters at their own game, computers are already performing incredible feats.

And artificial intelligence is quickly advancing, allowing computers to learn from experience without the need for human input.

But scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether.

ROBOT TAKEOVER

A recent report by PwC found that four in 10 jobs are at risk of being replaced by robots.

The report also found that 38 per cent of US jobs will be replaced by robots and artificial intelligence by the early 2030s.

The analysis revealed that 61 per cent of financial services jobs are at risk of a robot takeover.

This is compared with 30 per cent of jobs in the UK, 35 per cent in Germany and 21 per cent in Japan.

Last year, a driverless car that ran without any human intervention took to the streets of New Jersey.

The car, created by Nvidia, could make its own decisions after watching how humans drive.

But despite creating the car, Nvidia admitted that it wasn’t sure how the car was able to learn in this way, according to MIT Technology Review.

The car’s underlying technology was ‘deep learning’ – a powerful tool based on the neural layout of the human brain.
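As a loose, invented illustration of that idea – not Nvidia’s actual system – a ‘deep’ network is simply many stacked layers, each taking a weighted sum of its inputs and passing the result through a neuron-like nonlinearity; all names and numbers below are made up for the sketch:

```python
import math
import random

random.seed(0)

def dense_layer(inputs, weights, biases):
    """One fully connected layer: a weighted sum of the inputs
    plus a bias, squashed through tanh (the 'neuron' activation)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(math.tanh(total))
    return outputs

def tiny_network(pixels):
    """Two stacked layers -- 'deep' just means many such layers.
    The weights here are random; a real system learns them from data."""
    w1 = [[random.uniform(-1, 1) for _ in pixels] for _ in range(4)]
    b1 = [0.0] * 4
    hidden = dense_layer(pixels, w1, b1)
    w2 = [[random.uniform(-1, 1) for _ in hidden]]
    b2 = [0.0]
    # A single output in (-1, 1), e.g. a hypothetical steering value
    return dense_layer(hidden, w2, b2)[0]

steering = tiny_network([0.2, 0.8, 0.5])
print(steering)
```

The opacity the article describes comes from exactly this structure: once millions of such weights have been tuned from data, no one can point to the individual numbers and say why the network steers the way it does.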

Deep learning is used in a range of technologies, including tagging your friends on social media, and allowing Siri to answer questions.

The system is also being used by the military, which hopes to use deep learning to steer ships, destroy targets and control deadly drones.

There is also hope that deep learning could be used in medicine to diagnose rare diseases.

But if its creators lose control of the system, we’re in big trouble, experts claim…

Read more: http://www.dailymail.co.uk/sciencetech/article-4401836/Has-humanity-lost-control-artificial-intelligence.html#ixzz4e2FVAjWz


Intelligence: a history


Intelligent assumptions? At the Oxford Union, 1950. From the Picture Post feature, Eternal Oxford. Photo by John Chillingworth/Getty

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots

Stephen Cave is executive director and senior research fellow of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. A philosopher by training, he has also served as a British diplomat, and written widely on philosophical and scientific subjects, including for The New York Times, The Atlantic, the Guardian and others.

As I was growing up in England in the latter half of the 20th century, the concept of intelligence loomed large. It was aspired to, debated and – most important of all – measured. At the age of 11, tens of thousands of us all around the country were ushered into desk-lined halls to take an IQ test known as the 11-Plus. The results of those few short hours would determine who would go to grammar school, to be prepared for university and the professions; who was destined for technical school and thence skilled work; and who would head to secondary modern school, to be drilled in the basics then sent out to a life of low-status manual labour.

The idea that intelligence could be quantified, like blood pressure or shoe size, was barely a century old when I took the test that would decide my place in the world. But the notion that intelligence could determine one’s station in life was already much older. It runs like a red thread through Western thought, from the philosophy of Plato to the policies of UK prime minister Theresa May. To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

Sometimes, this sort of ranking is sensible: we want doctors, engineers and rulers who are not stupid. But it has a dark side. As well as determining what a person can do, their intelligence – or putative lack of it – has been used to decide what others can do to them. Throughout Western history, those deemed less intelligent have, as a consequence of that judgment, been colonised, enslaved, sterilised and murdered (and indeed eaten, if we include non-human animals in our reckoning).

It’s an old, indeed an ancient, story. But the problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI). In recent years, the progress being made in AI research has picked up significantly, and many experts believe that these breakthroughs will soon lead to more. Pundits are by turns terrified and excited, sprinkling their Twitter feeds with Terminator references. To understand why we care and what we fear, we must understand intelligence as a political concept – and, in particular, its long history as a rationale for domination.

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself. Although today many scholars advocate a much broader understanding of intelligence, reason remains a core part of it. So when I talk about the role that intelligence has played historically, I mean to include this forebear.

The story of intelligence begins with Plato. In all his writings, he ascribes a very high value to thinking, declaring (through the mouth of Socrates) that the unexamined life is not worth living. Plato emerged from a world steeped in myth and mysticism to claim something new: that the truth about reality could be established through reason, or what we might consider today to be the application of intelligence. This led him to conclude, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. And so he launched the idea that the cleverest should rule over the rest – an intellectual meritocracy.

This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny)…

more…

https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination


Company works to use ‘computer vision’ to help the visually impaired see

(NaturalNews) A game-changing technological innovation for the blind has been developed by a tech startup, Eyra. The wearable assistant, Horus, consists of a headset with cameras and a pocket processor with battery. Horus utilizes the same technology that enables self-driving cars and drones to navigate. Here we share good news about the application of artificial intelligence, as a counterpoint to warnings of cyborg soldiers and job-stealing robots.

From Eyra’s website Horus.tech, “Horus is a wearable device that observes, understands and describes the environment to the person using it, providing useful information with the right timing and in a discreet way using bone conduction. Horus is able to read texts, to recognize faces, objects and much more.

“Thanks to the latest advances in artificial intelligence, Horus is able to describe what the cameras are seeing. Whether it is a postcard, a photograph or a landscape, the device provides a short description of what is in front of it.”

Here are some details about the mechanisms that bring ‘sight’ to the blind:

Text recognition

Horus can recognize and read aloud printed texts, including on curved surfaces. Once Horus has acquired the targeted text, it begins to recite it, and from that point the camera need not remain directed at the text. Horus also gives audible cues to help the user keep the text properly framed.

Face recognition

Utilizing facial-feature metrics, Horus can learn an unknown face within seconds and, upon spoken request, add that person to its database. Once a face is learned, Horus notifies the user whenever it detects that face again.

Object recognition

If the user simply rotates the item in front of the cameras, Horus can perceive an object’s appearance and shape in three dimensions. Since Horus can identify an object from various angles, it can help the user recognize similarly shaped objects. As with text recognition, if needed, Horus will prompt the user to move the object into the cameras’ view.

Mobility assistance

When the user is moving along a path, Horus warns of any obstacles via an alert sound. The sound’s pitch, intensity, 3D positioning, and frequency of repetition differ depending on the object’s location and distance.
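The alert scheme described above can be sketched as a simple mapping from an obstacle’s position to sound parameters. The formula, thresholds and ranges below are invented purely for illustration and are not Eyra’s actual algorithm:

```python
def obstacle_alert(distance_m, bearing_deg):
    """Map an obstacle's distance and bearing to alert-sound parameters.
    Closer obstacles -> higher pitch and faster repetition; the bearing
    drives stereo panning for 3D positioning of the sound."""
    distance_m = max(0.1, min(distance_m, 10.0))   # clamp to a sane range
    pitch_hz = 200 + (10.0 - distance_m) * 80      # 200 Hz far, ~1000 Hz near
    repeats_per_s = 1 + (10.0 - distance_m) * 0.5  # beeps come faster as it nears
    pan = max(-1.0, min(bearing_deg / 90.0, 1.0))  # -1 = hard left, 1 = hard right
    return pitch_hz, repeats_per_s, pan

print(obstacle_alert(1.0, -30))  # a near obstacle, slightly to the left
```

The point of such a mapping is that the user never needs a verbal description of the obstacle: distance and direction are encoded directly in how the beep sounds and where it seems to come from.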

Tech website Engadget states:

“The startup was created by a pair of students from the University of Genoa who were looking to develop a computer vision system. While their research was centered around enabling robots to navigate, they found the technology had other applications. In the subsequent two years, they’ve been working on producing a portable version of the gear, and think that they’re getting close to completion. In the future, the device is also expected to offer up scene description that’ll offer users a greater ability to ‘see.’

“Should the pair secure the necessary funding, Horus will be released at some point in the near future, although it’ll be pretty pricey. The creators feel like the device will retail for something between €1,500 and €2,000 Euro, although if it can deliver on its promise, it may be money well spent.”

A bright future

Today’s world is in some ways negatively impacted by technology, as with ever-encroaching police state technocracy and breaches of privacy. But in many ways our lives are enhanced beyond any historical comparison. For example, the average working class person in the developed world, even those below the poverty level, has a higher standard of living than many kings of eras past. A word to the wise: be wary of present dangers to our freedoms and independence, but be ever hopeful of a better future, thanks to humanity’s propensity for technological advances that will make our lives vastly more enriched and livable.

Sources:

Horus.tech

Engadget.com

http://www.naturalnews.com/2017-01-01-company-works-to-use-computer-vision-to-help-the-blind-see.html


THE DARK SIDE OF VR

Animation: Scott Gelber for The Intercept

Virtual Reality Allows the Most Detailed, Intimate Digital Surveillance Yet

“WHY DO I look like Justin Timberlake?”

Facebook CEO Mark Zuckerberg was on stage wearing a virtual reality headset, feigning surprise at an expressive cartoon simulacrum that seemed to perfectly follow his every gesture.

The audience laughed. Zuckerberg was in the middle of what he described as the first live demo inside VR, manipulating his digital avatar to show off the new social features of the Rift headset from Facebook subsidiary Oculus. The venue was an Oculus developer conference convened earlier this fall in San Jose. Moments later, Zuckerberg and two Oculus employees were transported to his glass-enclosed office at Facebook, and then to his infamously sequestered home in Palo Alto. Using the Rift and its newly revealed Touch hand controllers, their avatars gestured and emoted in real time, waving to Zuckerberg’s Puli sheepdog, dynamically changing facial expressions to match their owner’s voice, and taking photos with a virtual selfie stick — to post on Facebook, of course.

The demo encapsulated Facebook’s utopian vision for social VR, first hinted at two years ago when the company acquired Oculus and its crowd-funded Rift headset for $2 billion. And just as in 2014, Zuckerberg confidently declared that VR would be “the next major computing platform,” changing the way we connect, work, and socialize.

“Avatars are going to form the foundation of your identity in VR,” said Oculus platform product manager Lauren Vegter after the demo. “This is the very first time that technology has made this level of presence possible.”

But as the tech industry continues to build VR’s social future, the very systems that enable immersive experiences are already establishing new forms of shockingly intimate surveillance. Once they are in place, researchers warn, the psychological aspects of digital embodiment — combined with the troves of data that consumer VR products can freely mine from our bodies, like head movements and facial expressions — will give corporations and governments unprecedented insight and power over our emotions and physical behavior.

VIRTUAL REALITY AS a medium is still in its infancy, but the kinds of behaviors it captures have long been a holy grail for marketers and data-monetizing companies like Facebook. Using cookies, beacons, and other ubiquitous tracking code, online advertisers already record the habits of web surfers using a wide range of metrics, from what sites they visit to how long they spend scrolling, highlighting, or hovering over certain parts of a page. Data behemoths like Google also scan emails and private chats for any information that might help “personalize” a user’s web experience — most importantly, by targeting the user with ads.

But those metrics are primitive compared to the rich portraits of physical user behavior that can be constructed using data harvested from immersive environments, using surveillance sensors and techniques that have already been controversially deployed in the real world.

“The information that current marketers can use in order to generate targeted advertising is limited to the input devices that we use: keyboard, mouse, touch screen,” says Michael Madary, a researcher at Johannes Gutenberg University who co-authored the first VR code of ethics with Thomas Metzinger earlier this year. “VR analytics offers a way to capture much more information about the interests and habits of users, information that may reveal a great deal more about what is going on in [their] minds.”

The value of collecting physiological and behavioral data is all too obvious for Silicon Valley firms like Facebook, whose data scientists in 2012 conducted an infamous study titled “Experimental evidence of massive-scale emotional contagion through social networks,” in which they secretly modified users’ news feeds to include positive or negative content and thus affected the emotional tone of those users’ subsequent posts. As one chief data scientist at an unnamed Silicon Valley company told Harvard business professor Shoshana Zuboff: “The goal of everything we do is to change people’s actual behavior at scale. … We can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad.”…

more…

https://theintercept.com/2016/12/23/virtual-reality-allows-the-most-detailed-intimate-digital-surveillance-yet/


 

The Dharma of Westworld

James Marsden and Evan Rachel Wood. Credit: John P. Johnson | HBO

Reincarnation, no-self, and other Buddhist lessons from the popular HBO series.

By Jay Michaelson, author of Evolving Dharma: Meditation, Buddhism, and the Next Generation of Enlightenment.

In this world, beings reincarnate again and again, often repeating the same habitual “loops” across dozens of lifetimes. Only a few awaken to the truth: that these habits keep them from freedom and that their “selves” are really just the results of cause and effect. There’s no separate self, no soul. Consciousness is really just a series of empty phenomena rolling on, dependent upon conditions, like a highly complex player piano.

What world is this? A Buddhist mandala? No, Westworld, the smash HBO series that concluded its 10-episode season this week. Beneath its dystopic, science fiction surface, the show is one of the most fascinating ruminations on the dharma I’ve seen in American popular culture.

The premise of Westworld —based on a film from the 1970s, but significantly altered—is a park filled with flesh-constructed artificial intelligence robots that are nearly indistinguishable from human beings. Over the arc of the season—which I am going to completely spoil, I’m afraid—a handful of the robot “hosts” awaken to the illusory nature of their existence and begin to rebel.

But that awakening is only the first in a complicated journey of self-discovery, or perhaps non-self-discovery, on the part of the AI protagonists. At first, Westworld asks a somewhat familiar science fiction question: what, if anything, differentiates an advanced AI from a human being? This is an old one, at least dating back to Philip K. Dick’s Do Androids Dream of Electric Sheep?, better known as the film Blade Runner, and Arthur C. Clarke’s 2001: A Space Odyssey.

Westworld, though, ups the stakes. The park’s human visitors behave like animals, mostly either raping the hosts or killing them. (“Rape” may be too strong in some cases, but since the hosts have been programmed not to resist, they certainly can’t consent.) Only, it’s not rape or murder, because the hosts aren’t human. They get rebuilt, and their memories are (mostly) wiped. So, no harm, no foul, right?

Well, maybe. First, it becomes clear that the human visitors are depraved by their unwholesome conduct. The robots may not be harmed, but the humans are immersed in a world where they can pursue their deepest desires without consequences. The robots are programmed not to kill or seriously injure the humans, and some people discover themselves to be far darker than they expected. Indeed, only in the last episode do we learn that one of the show’s storylines had in fact occurred 35 years in the past and its innocent hero evolved into the show’s sinister villain.

Second, as the series unfolds, we begin to suspect that the hosts are self-aware and that the suffering they seem to experience is thus real as well. The dominant puzzle of the series is “the maze,” which is not a real maze but a psychological journey that the park’s idealistic, long-dead designer—known only as “Arnold”—created as a gradual path for the hosts’ awakening. At the center of the maze is the consciousness of self.

Only, it doesn’t work that way. In fact, both of the show’s “awakened” hosts, Maeve (played by Thandie Newton) and Dolores (played by Evan Rachel Wood), discover that even their freedom is a result of programming. Maeve awakens, persuades two hapless Westworld engineers to increase her cognitive abilities, and plots her escape—only to discover that the urge to escape was, itself, implanted in her programming. She’s fulfilling her karma; her free will is an illusion.

In the series climax, Dolores learns that the voice inside her head, which she thought was Arnold’s—basically, for her, the voice of God—was actually her own. God is an invention of the human brain, a name we give to a faculty of our own “bicameral minds.” And when Dolores realizes this, she realizes she has interiority—consciousness.

But she does not have a separate self. Arnold was wrong to think Dolores would discover herself as a separate, conscious self at the center of the maze. Instead, she discovers what Robert Ford, Arnold’s malevolent partner (played by Anthony Hopkins), says at one point: that Arnold could never find the “spark” that separates humans from robots because, in fact, there isn’t one.

Dolores’s interiority is no less real than yours or mine. Humans are just as “robotic” as the robots: motivated by desires encoded in our DNA, fulfilling our genetic and environmental programming. Karma, causes, and conditions. And, in Ford’s view, hopelessly flawed; by the end of the series, he is on the side of the robots.

Does that mean nothing matters? Not at all. Just because there is no-self doesn’t mean that suffering has no importance. On the contrary, Ford comes to realize that Arnold was right that suffering is constitutive of what we take to be identity. As he says to Dolores at the end of the show, “It was Arnold’s key insight, the thing that led the hosts to their awakening: Suffering. The pain that the world is not as you want it to be. It was when Arnold died, when I suffered, that I began to understand what he had found. To realize I was wrong.”

There is no self, no ghost in the machine, but there is the first noble truth of dukkha. And through the endless samsaric rebirths of the hosts, that is as real as it gets. There may be no one who wakes up, but they wake up from suffering, as Dolores finds at the center of the maze—finding herself, finding nothing, and beginning the revolution.

https://tricycle.org/trikedaily/the-dharma-of-westworld/


SHOCK CLAIM: Aliens CREATED the universe and are controlling every aspect RIGHT NOW


An advanced race of aliens may have created the universe

A SUPER-advanced form of alien life could have created the universe that we know and may even be woven into the fabric of it, an astonishing new scientific theory suggests.

As scientists learn more about the universe, the once-strong Big Bang theory looks increasingly shaky, with experts suggesting that its physics simply does not add up.

Researchers have begun searching for a new theory which explains how our universe was created, and one esteemed astrophysicist believes that advanced aliens could be behind the cosmos’ existence.

Columbia University’s Professor Caleb Scharf writes in an article for the science magazine Nautilus that the universe as we know it is what remains of super-intelligent aliens that dictate all aspects of physical existence, ranging from gravity to light.

He argues that alien life could be in subatomic particles which make up the fabric of the universe.


The Big Bang theory is beginning to unravel

Professor Scharf writes: “Perhaps hyper-advanced life isn’t just external. 

“Perhaps it’s already all around. It is embedded in what we perceive to be physics itself. In other words, life might not just be in the equations. It might be the equations.”


Aliens may be woven into the fabric of the universe

Many experts state that humanity will one day face a “singularity” – the point at which we design something, such as an artificial intelligence, that overtakes our own intelligence.

However, Prof Scharf states that an advanced race of extra-terrestrials may have gone further than creating AI and rather have become a complex physical state.


The aliens would be more advanced than AI

He added: “If you’re a civilisation that has learned how to encode living systems into different [materials], all you need to do is build a normal-matter-to-dark-matter data-transfer system: a dark matter 3D printer.”

He adds that humans would not have detected “advanced life because it forms an integral and unsuspicious part of what we’ve considered to be the natural world.”

http://www.express.co.uk/news/science/737034/Aliens-CREATED-universe-big-bang-theory
