This Psychologist Is Using A.I. to Predict Who Will Attempt Suicide

According to Joe Franklin, computers are far better than people when it comes to guessing who’s at risk

by Diane Shipley

The U.S. suicide rate is at a 30-year high. According to the National Center for Health Statistics, in 2014 (the last year for which figures are available), 42,773 Americans took their own lives, most of them men.

It’s a crisis, one mental health professionals have historically been ill-equipped to handle. Last year, Joseph Franklin (then a postdoctoral fellow at Harvard, now an assistant professor of clinical psychology at Florida State University) looked at 365 studies on suicide over the past 50 years and found that someone flipping a coin had the same chance of correctly predicting whether a patient would die by suicide as an experienced psychiatrist — 50/50.

If humans are so mediocre when it comes to gauging suicidal intentions, could machines be better? Signs point to yes. IBM’s Watson supercomputer diagnosed a rare cancer doctors missed, while in England, the National Health Service is trying out Google’s DeepMind artificial intelligence for everything from diagnosing eye illnesses to finding out how best to target radiotherapy.

The link between A.I. and mental health is less hyped, but Franklin and his team have developed algorithms that can predict whether someone will die by suicide with over 80 percent accuracy. He hopes they may soon become standard, in the form of software that every clinician has access to — and thus help save lives.

What made you want to study suicide prediction?
When I got into suicide research, I wanted to look at everything and see where we were. My hope was that would provide me and my colleagues with some more specific direction on what we knew and could build on. And what we found was quite surprising. We figured out that people have been doing this research where we’ve been very bad at predicting suicidal thoughts and behaviors and we really haven’t improved across 50 years.

Are there common misconceptions about suicide risk?
A lot of people believe that only someone who is showing clear signs of depression is likely to have this happen. I’m not saying depression has nothing to do with it, but it’s not synonymous with that. We can conservatively say 96 percent of people who’ve had severe depression aren’t going to die by suicide.

Most of our theories which say this one thing causes suicide or this combination of three or four things causes suicide — it looks like none of those are going to be adequate. They may all be partially correct but maybe only account for 5 percent of what happens. Our theories have to take into account the fact that hundreds if not thousands of things contribute to suicidal thoughts and behaviors.

More men take their lives than women, but more women attempt suicide. Are there any theories why?
One thing people point to now is something called suicide capability, which is basically a fearlessness about death and an ability to enact death, and one assumption is that men, particularly older men, may be more capable of engaging with these behaviors. But evidence on that right now is not conclusive.

Are traditional risk assessments getting some things right?
Talking to people, not making it this taboo subject, I think that’s great. The problem is we haven’t given them much to go on. Our implicit goal has often been to do research so we can tell clinicians what the most important factors are, and what we’re finding is that we’re just not very accurate.

What we’re going to have to do is this artificial intelligence approach so that all clinicians are able to have something that automatically delivers a very accurate score of where this person is in terms of risk. I think we should be trying to develop that instead of, you know, “these are the five questions to ask.”

How does artificial intelligence predict who is most at risk?
We took thousands of people in this medical database and pored through their records, labeled the ones who had clearly attempted suicide on a particular date and the ones that could not be determined to have attempted suicide, and we then let a machine-learning program run its course. We then applied it to a new set of data to make sure that it worked. The machine has now learned, at least within this particular database of millions of people, what the optimal algorithm seems to be for separating people who are and are not going to attempt suicide…
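To make that workflow concrete, here is a minimal sketch of the label-train-validate pipeline Franklin describes, written against synthetic data with an off-the-shelf classifier. The features, model choice and evaluation metric are illustrative assumptions, not details of the actual study or its medical database.

```python
# Hypothetical sketch of the supervised-learning workflow described above:
# label historical records, train a classifier, then check it on held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for de-identified records: 20 numeric risk factors per person,
# with a rare positive class (a documented attempt on a known date).
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)

# Hold out data the model never sees during training, mirroring the step of
# applying the learned algorithm to a new set of data to make sure it works.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, risk_scores), 3))
```

The held-out evaluation is the part that matters: a risk score is only useful to a clinician if it holds up on people the algorithm has never seen.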

more…

https://melmagazine.com/this-psychologist-is-using-a-i-to-predict-who-will-attempt-suicide-696cd24bbc15

WIKK WEB GURU

 

fAIth


Image edited by Web Investigator – From The Song of Los (1795) by William Blake. Courtesy Library of Congress

The most avid believers in artificial intelligence are aggressively secular – yet their language is eerily religious. Why?

Beth Singler is a research associate at the Faraday Institute for Science and Religion, and an associate fellow at the Leverhulme Centre for the Future of Intelligence, both at the University of Cambridge. Her first book, The Indigo Children: New Age Experimentation with Self and Science, is forthcoming.

My stomach sank the moment the young man stood up. I’d observed him from afar during the coffee breaks, and I knew the word ‘Theologian’ was scrawled on the delegate badge pinned to his lapel, as if he’d been a last-minute addition to the conference. He cleared his throat and asked the panel on stage how they’d solve the problem of selecting which moral codes we ought to program into artificially intelligent machines (AI). ‘For example, masturbation is against my religious beliefs,’ he said. ‘So I wonder how we’d go about choosing which of our morals are important?’

The audience of philosophers, technologists, ‘transhumanists’ and AI fans erupted into laughter. Many of them were well-acquainted with the so-called ‘alignment problem’, the knotty philosophical question of how we should bring the goals and objectives of our AI creations into harmony with human values. But the notion that religion might have something to add to the debate seemed risible to them. ‘Obviously we don’t want the AI to be a terrorist,’ a panellist later remarked. Whatever we get our AI to align with, it should be ‘nothing religious’.

At the same event, in New York, I introduced myself to a grey-haired computer scientist by saying that I was a researcher at the Faraday Institute for Science and Religion at the University of Cambridge. His immediate response: ‘Those two things can’t go together.’ The religious reaction to AI was about as relevant as the religious response to renewable energy, he said – that is, not at all. It was only later that it occurred to me that many of President Donald Trump’s evangelical Christian supporters give the lie to his claim. Some have very distinct views on the ‘distractions’ of renewable energy, on climate change, and on how God has willed this planet and all its resources to us to use exactly as we wish.

The odd thing about the anti-clericalism in the AI community is that religious language runs wild in its ranks, and in how the media reports on it. There are AI ‘oracles’ and technology ‘evangelists’ of a future that’s yet to come, plus plenty of loose talk about angels, gods and the apocalypse. Ray Kurzweil, an executive at Google, is regularly anointed a ‘prophet’ by the media – sometimes as a prophet of a coming wave of ‘superintelligence’ (a sapience surpassing any human’s capability); sometimes as a ‘prophet of doom’ (thanks to his pronouncements about the dire prospects for humanity); and often as a soothsayer of the ‘singularity’ (when humans will merge with machines, and as a consequence live forever). The tech folk who also invoke these metaphors and tropes operate in overtly and almost exclusively secular spaces, where rationality is routinely pitched against religion. But believers in a ‘transhuman’ future – in which AI will allow us to transcend the human condition once and for all – draw constantly on prophetic and end-of-days narratives to understand what they’re striving for.

more…

https://aeon.co/essays/why-is-the-language-of-transhumanists-and-religion-so-similar

WIKK WEB GURU

We Need Conscious Robots

H. Armstrong Roberts / ClassicStock / Getty Images

How introspection and imagination make robots better.

People often ask me whether human-level artificial intelligence will eventually become conscious. My response is: Do you want it to be conscious? I think it is largely up to us whether our machines will wake up.

That may sound presumptuous. The mechanisms of consciousness—the reasons we have a vivid and direct experience of the world and of the self—are an unsolved mystery in neuroscience, and some people think they always will be; it seems impossible to explain subjective experience using the objective methods of science. But in the 25 or so years that we’ve taken consciousness seriously as a target of scientific scrutiny, we have made significant progress. We have discovered neural activity that correlates with consciousness, and we have a better idea of what behavioral tasks require conscious awareness. Our brains perform many high-level cognitive tasks subconsciously.

Consciousness, we can tentatively conclude, is not a necessary byproduct of our cognition. The same is presumably true of AIs. In many science-fiction stories, machines develop an inner mental life automatically, simply by virtue of their sophistication, but it is likelier that consciousness will have to be expressly designed into them.

And we have solid scientific and engineering reasons to try to do that. Our very ignorance about consciousness is one. The engineers of the 18th and 19th centuries did not wait until physicists had sorted out the laws of thermodynamics before they built steam engines. It worked the other way round: Inventions drove theory. So it is today. Debates on consciousness are often too philosophical and spin around in circles without producing tangible results. The small community of us who work on artificial consciousness aims to learn by doing.

Furthermore, consciousness must have some important function for us, or else evolution wouldn’t have endowed us with it. The same function would be of use to AIs. Here, too, science fiction might have misled us. For the AIs in books and TV shows, consciousness is a curse. They exhibit unpredictable, intentional behaviors, and things don’t turn out well for the humans. But in the real world, dystopian scenarios seem unlikely. Whatever risks AIs may pose do not depend on their being conscious. To the contrary, conscious machines could help us manage the impact of AI technology. I would much rather share the world with them than with thoughtless automatons.

When AlphaGo was playing against the human Go champion, Lee Sedol, many experts wondered why AlphaGo played the way it did. They wanted some explanation, some understanding of AlphaGo’s motives and rationales. Such situations are common for modern AIs, because their decisions are not preprogrammed by humans, but are emergent properties of the learning algorithms and the data set they are trained on. Their inscrutability has created concerns about unfair and arbitrary decisions. Already there have been cases of discrimination by algorithms; for instance, a ProPublica investigation last year found that an algorithm used by judges and parole officers in Florida flagged black defendants as more prone to recidivism than they actually were, and white defendants as less prone than they actually were.

Beginning next year, the European Union will give its residents a legal “right to explanation.” People will be able to demand an accounting of why an AI system made the decision it did. This new requirement is technologically demanding. At the moment, given the complexity of contemporary neural networks, we have trouble discerning how AIs produce decisions, much less translating the process into a language humans can make sense of.


If we can’t figure out why AIs do what they do, why don’t we ask them? We can endow them with metacognition—an introspective ability to report their internal mental states. Such an ability is one of the main functions of consciousness. It is what neuroscientists look for when they test whether humans or animals have conscious awareness. For instance, a basic form of metacognition, confidence, scales with the clarity of conscious experience. When our brain processes information without our noticing, we feel uncertain about that information, whereas when we are conscious of a stimulus, the experience is accompanied by high confidence: “I definitely saw red!”…
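As a toy illustration (not the author's actual system), a standard classifier can already be made to report a confidence alongside each decision; the data and model below are arbitrary stand-ins.

```python
# Toy sketch: a model that reports not just a decision but how confident it is,
# a crude analogue of the kind of metacognitive self-report discussed above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

for features in X_test[:3]:
    probs = model.predict_proba(features.reshape(1, -1))[0]
    decision, confidence = probs.argmax(), probs.max()
    # The "introspective report": the model states its choice and how sure it is.
    print(f"I chose class {decision} with confidence {confidence:.2f}")
```

Genuine metacognition would go well beyond a probability printout, but the example shows the shape of the interface: ask the system, and it answers about its own state.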

more…

http://nautil.us/issue/47/consciousness/we-need-conscious-robots

WIKK WEB GURU

Raising good robots


Image edited by Web Investigator – Gael Rougegrez of the Blanca Li Dance Company performs ‘Robot’, 22 February 2017 in London, England. Photo by Ian Gavan/Getty

We already have a way to teach morals to alien intelligences: it’s called parenting. Can we apply the same methods to robots?

Regina Rini is an assistant professor and faculty fellow at the New York University Center for Bioethics, and an affiliate faculty member in the Medical Ethics division of the NYU Department of Population Health.

 

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

In 2016, a computer program challenged Lee Sedol, humanity’s leading player of the ancient game of Go. The program, a Google project called AlphaGo, is an early example of what AI might be like. In the second game of the match, AlphaGo made a move – ‘Move 37’ – that stunned expert commenters. Some thought it was a mistake. Lee, the human opponent, stood up from the table and left the room. No one quite knew what AlphaGo was doing; this was a tactic that expert human players simply did not use. But it worked. AlphaGo won that match, as it had the game before and the next game. In the end, Lee won only a single game out of five.

AlphaGo is very, very good at Go, but it is not good in the same way that humans are. Not even its creators can explain how it settles on its strategy in each game. Imagine that you could talk to AlphaGo and ask why it made Move 37. Would it be able to explain the choice to you – or to human Go experts? Perhaps. Artificial minds needn’t work as ours do to accomplish similar tasks.

In fact, we might discover that intelligent machines think about everything, not just Go, in ways that are alien to us. You don’t have to imagine some horrible science-fiction scenario, where robots go on a murderous rampage. It might be something more like this: imagine that robots show moral concern for humans, and robots, and most animals… and also sofas. They are very careful not to damage sofas, just as we’re careful not to damage babies. We might ask the machines: why are you so worried about sofas? And their explanation might not make sense to us, just as AlphaGo’s explanation of Move 37 might not make sense.

This line of thinking takes us to the heart of a very old philosophical puzzle about the nature of morality. Is it something above and beyond human experience, something that applies to anyone or anything that could make choices – or is morality a distinctly human creation, something specially adapted to our particular existence?

Long before robots, the ancient Greeks had to grapple with the morality of a different kind of alien mind: the teenager. The Greeks worried endlessly about how to cultivate morality in their youth. Plato thought that our human concept of justice, like all human concepts, was a pale reflection of some perfect form of Justice. He believed that we have an innate acquaintance with these forms, but that we understand them only dimly as children. Perhaps we will encounter pure Justice after death, but the task of philosophy is to try to reason our way back to these truths while we are still living…

more…

https://aeon.co/essays/creating-robots-capable-of-moral-reasoning-is-like-parenting

WIKK WEB GURU

Has humanity already lost control of artificial intelligence? Scientists admit that computers are learning too quickly for humans to keep up

Scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether. Pictured is the Terminator film, in which robots take over - a prospect that could soon become a reality

  • Last year, scientists made a driverless car that learned by watching humans
  • But even the creators of the car did not understand how it learned this way
  • In another study, a computer could pinpoint people with schizophrenia
  • Again, its creators were unsure how it was able to do this 

From driving cars to beating chess masters at their own game, computers are already performing incredible feats.

And artificial intelligence is quickly advancing, allowing computers to learn from experience without the need for human input.

But scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether.

ROBOT TAKEOVER

A recent report by PwC found that four in 10 jobs are at risk of being replaced by robots.

The report also found that 38 per cent of US jobs will be replaced by robots and artificial intelligence by the early 2030s.

The analysis revealed that 61 per cent of financial services jobs are at risk of a robot takeover.

The 38 per cent figure for the US compares with 30 per cent of jobs in the UK, 35 per cent in Germany and 21 per cent in Japan.

Last year, a driverless car that ran without any human intervention took to the streets of New Jersey.

The car, created by Nvidia, could make its own decisions after learning to drive by watching humans.

But despite creating the car, Nvidia admitted that it wasn’t sure how the car was able to learn in this way, according to MIT Technology Review.

The car’s underlying technology was ‘deep learning’ – a powerful tool based on the neural layout of the human brain.
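For readers wondering what ‘deep learning’ actually looks like, here is a deliberately tiny sketch: stacked layers of simple artificial ‘neurons’ whose connection weights are adjusted from examples. It is purely illustrative, uses made-up data, and has nothing to do with Nvidia’s actual system.

```python
# A minimal two-layer neural network trained with gradient descent,
# shown only to illustrate the idea of "deep learning" named above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # 200 toy "sensor readings"
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a toy driving decision to learn

W1 = rng.normal(size=(4, 8))                # first layer of "neurons"
W2 = rng.normal(size=(8, 1))                # output layer
lr = 0.5
for _ in range(2_000):
    hidden = np.tanh(X @ W1)                         # hidden-layer activations
    out = 1 / (1 + np.exp(-(hidden @ W2)))           # output probability
    grad_out = out - y.reshape(-1, 1)                # cross-entropy gradient
    grad_W2 = hidden.T @ grad_out / len(X)
    grad_W1 = X.T @ ((grad_out @ W2.T) * (1 - hidden ** 2)) / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("training accuracy:", ((out > 0.5).ravel() == y).mean())
```

The network is never told which inputs matter; it works that out for itself, which is exactly why, at much larger scale, even a system’s creators can struggle to say how it reached a particular decision.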

Deep learning is used in a range of technologies, including tagging your friends on social media, and allowing Siri to answer questions.

The system is also being used by the military, which hopes to use deep learning to steer ships, destroy targets and control deadly drones.

There is also hope that deep learning could be used in medicine to diagnose rare diseases.

But if its creators lose control of the system, we’re in big trouble, experts claim…

Read more: http://www.dailymail.co.uk/sciencetech/article-4401836/Has-humanity-lost-control-artificial-intelligence.html#ixzz4e2FVAjWz

WIKK WEB GURU

Intelligence: a history


Intelligent assumptions? At the Oxford Union, 1950. From the Picture Post feature, Eternal Oxford. Photo by John Chillingworth/Getty

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots

Stephen Cave is executive director and senior research fellow of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. A philosopher by training, he has also served as a British diplomat, and written widely on philosophical and scientific subjects, including for The New York Times, The Atlantic, the Guardian and others.

As I was growing up in England in the latter half of the 20th century, the concept of intelligence loomed large. It was aspired to, debated and – most important of all – measured. At the age of 11, tens of thousands of us all around the country were ushered into desk-lined halls to take an IQ test known as the 11-Plus. The results of those few short hours would determine who would go to grammar school, to be prepared for university and the professions; who was destined for technical school and thence skilled work; and who would head to secondary modern school, to be drilled in the basics then sent out to a life of low-status manual labour.

The idea that intelligence could be quantified, like blood pressure or shoe size, was barely a century old when I took the test that would decide my place in the world. But the notion that intelligence could determine one’s station in life was already much older. It runs like a red thread through Western thought, from the philosophy of Plato to the policies of UK prime minister Theresa May. To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

Sometimes, this sort of ranking is sensible: we want doctors, engineers and rulers who are not stupid. But it has a dark side. As well as determining what a person can do, their intelligence – or putative lack of it – has been used to decide what others can do to them. Throughout Western history, those deemed less intelligent have, as a consequence of that judgment, been colonised, enslaved, sterilised and murdered (and indeed eaten, if we include non-human animals in our reckoning).

It’s an old, indeed an ancient, story. But the problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI). In recent years, the progress being made in AI research has picked up significantly, and many experts believe that these breakthroughs will soon lead to more. Pundits are by turn terrified and excited, sprinkling their Twitter feeds with Terminator references. To understand why we care and what we fear, we must understand intelligence as a political concept – and, in particular, its long history as a rationale for domination.

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself. Although today many scholars advocate a much broader understanding of intelligence, reason remains a core part of it. So when I talk about the role that intelligence has played historically, I mean to include this forebear.

The story of intelligence begins with Plato. In all his writings, he ascribes a very high value to thinking, declaring (through the mouth of Socrates) that the unexamined life is not worth living. Plato emerged from a world steeped in myth and mysticism to claim something new: that the truth about reality could be established through reason, or what we might consider today to be the application of intelligence. This led him to conclude, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. And so he launched the idea that the cleverest should rule over the rest – an intellectual meritocracy.

This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny)…

more…

https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination

WIKK WEB GURU

Company works to use ‘computer vision’ to help the visually impaired see



(NaturalNews) A game-changing technological innovation for the blind has been developed by a tech startup, Eyra. The wearable assistant, Horus, consists of a headset with cameras and a pocket processor with battery. Horus utilizes the same technology that enables auto-drive cars and drones to navigate. Here we share good news regarding the application of artificial intelligence, versus warnings of cyborg soldiers and job-stealing robots.

From Eyra’s website Horus.tech, “Horus is a wearable device that observes, understands and describes the environment to the person using it, providing useful information with the right timing and in a discreet way using bone conduction. Horus is able to read texts, to recognize faces, objects and much more.

“Thanks to the latest advances in artificial intelligence, Horus is able to describe what the cameras are seeing. Whether it is a postcard, a photograph or a landscape, the device provides a short description of what is in front of it.”

Here are some details about the mechanisms that bring ‘sight’ to the blind:

Text recognition

Horus can recognize and read aloud printed text, including on curved surfaces. Once Horus has captured the targeted text, it begins reading it back, and the camera no longer needs to stay pointed at the page. Horus also gives the user audible cues to keep the text properly framed.

Face recognition

Utilizing facial feature metrics, Horus can learn an unknown face within seconds and, upon spoken request, add that person to its database. Once a face has been learned, Horus immediately notifies the user whenever it detects that face again.
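As a rough sketch of how that learn-then-recognise behaviour could work in principle (the random ‘embeddings’ and the similarity threshold below are stand-in assumptions, not Eyra’s implementation): store one feature vector per known person and match new faces against them.

```python
# Hypothetical face-matching sketch: in a real device the embeddings would come
# from a face-recognition model applied to camera images, not random numbers.
import numpy as np

rng = np.random.default_rng(42)
known_faces = {}                                   # name -> stored feature vector

def learn_face(name, embedding):
    """Store a normalised feature vector for a newly learned person."""
    known_faces[name] = embedding / np.linalg.norm(embedding)

def identify_face(embedding, threshold=0.8):
    """Return the best-matching known name, or None if nothing is close enough."""
    embedding = embedding / np.linalg.norm(embedding)
    best_name, best_score = None, threshold
    for name, stored in known_faces.items():
        score = float(stored @ embedding)          # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name

alice = rng.normal(size=128)                       # stand-in for a face embedding
learn_face("Alice", alice)
print(identify_face(alice + rng.normal(scale=0.1, size=128)))  # likely "Alice"
print(identify_face(rng.normal(size=128)))                     # likely None
```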

Object recognition

If the user simply rotates the item in front of the cameras, Horus can perceive an object’s appearance and shape in three dimensions. Since Horus can identify an object from various angles, it can help the user recognize similarly shaped objects. As with text recognition, if needed, Horus will prompt the user to move the object into the cameras’ view.

Mobility assistance

When moving along a path, the user will be warned by Horus of any obstacles, via an alert sound. Its pitch, intensity, 3D positioning, and frequency of repetition will differ, depending on the object’s location and distance.
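As a hypothetical sketch (not Eyra’s actual algorithm), an obstacle’s distance and bearing could be mapped to alert parameters along these lines; the ranges and formulas are invented for illustration.

```python
# Invented mapping from obstacle position to alert sound parameters.
def obstacle_alert(distance_m, bearing_deg):
    """Return alert parameters for an obstacle at the given distance and bearing."""
    distance_m = max(0.5, min(distance_m, 5.0))         # clamp to a 0.5-5 m range
    closeness = 1.0 - (distance_m - 0.5) / 4.5          # 0 = far, 1 = very close
    return {
        "pitch_hz": 400 + 800 * closeness,              # closer -> higher pitch
        "volume": 0.2 + 0.8 * closeness,                # closer -> louder
        "repeats_per_sec": 1 + 7 * closeness,           # closer -> faster beeping
        "pan": max(-1.0, min(bearing_deg / 90.0, 1.0)), # left/right positioning
    }

print(obstacle_alert(distance_m=0.8, bearing_deg=-30))  # near obstacle to the left
print(obstacle_alert(distance_m=4.0, bearing_deg=10))   # distant obstacle, slightly right
```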

Tech website Engadget states:

“The startup was created by a pair of students from the University of Genoa who were looking to develop a computer vision system. While their research was centered around enabling robots to navigate, they found the technology had other applications. In the subsequent two years, they’ve been working on producing a portable version of the gear, and think that they’re getting close to completion. In the future, the device is also expected to offer up scene description that’ll offer users a greater ability to ‘see.’

“Should the pair secure the necessary funding, Horus will be released at some point in the near future, although it’ll be pretty pricey. The creators feel like the device will retail for something between €1,500 and €2,000 Euro, although if it can deliver on its promise, it may be money well spent.”

A bright future

Today’s world is in some ways negatively impacted by technology, as with ever-encroaching police state technocracy and breaches of privacy. But in many ways our lives are enhanced beyond any historical comparison. For example, the average working class person in the developed world, even those below the poverty level, has a higher standard of living than many kings of eras past. A word to the wise: be wary of present dangers to our freedoms and independence, but be ever hopeful of a better future, thanks to humanity’s propensity for technological advances that will make our lives vastly more enriched and livable.

Sources:

Horus.tech

Engadget.com

http://www.naturalnews.com/2017-01-01-company-works-to-use-computer-vision-to-help-the-blind-see.html

WIKK WEB GURU
