Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Deep Learning.

Thursday, April 13, 2023

Why artificial intelligence needs to understand consequences

Neil Savage
Nature
Originally published February 24, 2023

Here is an excerpt:

The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.

In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning4. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.

A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
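To make the idea of learning from interventions concrete, here is a minimal toy sketch (my own illustration of the concept, not code from the article; the rain/sprinkler variables are a standard textbook example). Observing the system tells you how often the grass is wet; intervening on the sprinkler, written do(sprinkler = value), overrides its usual mechanism and reveals the causal effect of that change.

    # Toy structural causal model: rain and sprinkler both cause wet grass.
    # Compare plain observation with an intervention do(sprinkler = value).
    import random

    def simulate(intervene_sprinkler=None, n=10_000):
        wet_count = 0
        for _ in range(n):
            rain = random.random() < 0.3
            if intervene_sprinkler is None:
                # Observational mechanism: the sprinkler usually stays off when it rains.
                sprinkler = random.random() < (0.1 if rain else 0.6)
            else:
                # Intervention: force the sprinkler on or off, ignoring its usual mechanism.
                sprinkler = intervene_sprinkler
            wet_count += (rain or sprinkler)
        return wet_count / n

    print("P(wet), observed:          ", round(simulate(), 3))
    print("P(wet | do(sprinkler=on)): ", round(simulate(True), 3))
    print("P(wet | do(sprinkler=off)):", round(simulate(False), 3))

Comparing the interventional runs with the observational one is exactly the kind of "doing" that a purely associative model never gets to see.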

Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs5 — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
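Bengio's group uses a neural network to propose the graphs, but the underlying "propose a graph, keep the ones that fit" loop can be sketched in a few lines. This is my simplified illustration, scoring each candidate parent set with a BIC-style fit rather than with a neural network:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)
    z = rng.normal(size=n)                          # actually unrelated to y
    y = 2.0 * x + rng.normal(scale=0.5, size=n)     # ground truth: x -> y

    def score(target, parents):
        """Lower is better: fit 'target' linearly on its parents, penalizing extra edges."""
        design = np.column_stack(parents) if parents else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        residual = target - design @ beta
        k = design.shape[1]
        return n * np.log(np.mean(residual ** 2)) + k * np.log(n)

    candidate_graphs = {
        "no arrows into y":  [],
        "z -> y":            [z],
        "x -> y":            [x],
        "x -> y and z -> y": [x, z],
    }
    for name, parents in candidate_graphs.items():
        print(f"{name:20s} score = {score(y, parents):8.1f}")

The candidates containing the true arrow score far better than those without it, which is the sense in which "graphs that fit the data better are more likely to be accurate."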

This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.

Wednesday, November 13, 2019

MIT Creates World’s First Psychopath AI By Feeding It Reddit Violent Content

Navin Bondade
www.techgrabyte.com
Originally posted October 2019

Psychopathy in human intelligence runs wider and darker than we have yet fully understood, but scientists have nevertheless tried to implement a form of it in artificial intelligence.

Scientists at MIT have created the world’s first psychopath AI, called Norman. The purpose of Norman is to demonstrate that AI does not become unfair and biased on its own; it does so only when biased data is fed into it.

MIT’s scientists created Norman by training it on violent and gruesome content (images of people dying in disturbing circumstances, taken from an unnamed Reddit page) before showing it a series of Rorschach inkblot tests.

The scientists built a dataset from this unnamed Reddit page, a community dedicated to documenting and observing the disturbing reality of death, and trained Norman on it to perform image captioning.

The info is here.

Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.
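As a toy illustration of that test (entirely hypothetical numbers, not from the article), "sufficiently accurate" could be operationalized as a straightforward comparison against a trained-human baseline on cases where the true outcome is known:

    def accuracy(predictions, outcomes):
        return sum(p == o for p, o in zip(predictions, outcomes)) / len(outcomes)

    outcomes    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # what actually happened (hypothetical)
    human_calls = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # a trained professional's predictions (hypothetical)
    model_calls = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # the automated system's predictions (hypothetical)

    human_acc, model_acc = accuracy(human_calls, outcomes), accuracy(model_calls, outcomes)
    print(f"human {human_acc:.0%} vs model {model_acc:.0%}")
    print("use of the system is justified on accuracy grounds" if model_acc >= human_acc
          else "not justified on accuracy grounds")

In practice the comparison would need far more cases, and a check that no readily available alternative system does better, but the logic of the justification is just this.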

The article is here.

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
BBC.com
Originally published October 15, 2017

Here is an excerpt:

However, machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language by mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases, and to try to combat them.
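The kind of imported bias described here can be audited directly in a learned word-embedding space. Here is a tiny sketch (the vectors below are made up for illustration; real audits use learned embeddings such as word2vec or GloVe, and many more names):

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical 3-dimensional vectors standing in for learned word embeddings.
    vec = {
        "scientist": np.array([0.9, 0.1, 0.2]),
        "john":      np.array([0.8, 0.2, 0.1]),
        "fred":      np.array([0.7, 0.3, 0.2]),
        "joanna":    np.array([0.2, 0.9, 0.1]),
        "fiona":     np.array([0.3, 0.8, 0.2]),
    }

    for name in ["john", "fred", "joanna", "fiona"]:
        print(name, round(cosine(vec[name], vec["scientist"]), 2))

A systematic gap between the male and female names' similarity to "scientist" is exactly the learned association that would need to be detected and combated.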

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

Thursday, October 12, 2017

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
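For readers who want the formal statement, the 1999 information bottleneck objective is usually written as a trade-off between compressing the input and preserving what matters for the output (the notation below follows common later usage):

    \min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

Here X is the input, Y the target, T the compressed "bottleneck" representation, I(\cdot;\cdot) denotes mutual information, and \beta sets how much predictive relevance to Y is worth per bit of compression of X.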

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Tuesday, April 25, 2017

Artificial synapse on a chip will help mobile devices learn like the human brain

Luke Dormehl
Digital Trends
Originally posted April 6, 2017

Brain-inspired deep learning neural networks have been behind many of the biggest breakthroughs in artificial intelligence seen over the past 10 years.

But a new research project from the National Center for Scientific Research (CNRS), the University of Bordeaux, and Norwegian information technology company Evry could take these breakthroughs to the next level — thanks to the creation of an artificial synapse on a chip.

“There are many breakthroughs from software companies that use algorithms based on artificial neural networks for pattern recognition,” Dr. Vincent Garcia, a CNRS research scientist who worked on the project, told Digital Trends. “However, as these algorithms are simulated on standard processors they require a lot of power. Developing artificial neural networks directly on a chip would make this kind of tasks available to everyone, and much more power efficient.”

Synapses in the brain function as the connections between neurons. Learning takes place when these connections are reinforced, and improved when synapses are stimulated. The newly developed electronic devices (called “memristors”) emulate the behavior of these synapses, by way of a variable resistance that depends on the history of electronic excitations they receive.
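A toy sketch of that idea (purely illustrative, not a model of the team's actual device physics): the "weight" of the artificial synapse is a conductance that is nudged up or down by each pulse and remembered between pulses.

    class ToySynapse:
        def __init__(self, conductance=0.5):
            self.conductance = conductance          # plays the role of the synaptic weight

        def pulse(self, voltage):
            # Positive pulses strengthen the connection, negative pulses weaken it;
            # the device retains the new value, so its state encodes its history.
            self.conductance = min(1.0, max(0.0, self.conductance + 0.1 * voltage))

        def read(self, input_signal):
            return self.conductance * input_signal  # Ohm's-law-style readout

    synapse = ToySynapse()
    for v in (+1, +1, +1, -1):                      # a history of excitations
        synapse.pulse(v)
    print(synapse.read(1.0))                        # approximately 0.7: the readout reflects that history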

The article is here.

Tuesday, January 10, 2017

What is Artificial Intelligence Anyway?

Benedict Dellot
RSA.org
Originally published December 15, 2016

Here is an excerpt:

Machine learning is the main reason for the renewed interest in artificial intelligence, but deep learning is where the most exciting innovations are happening today. Considered by some to be a subfield of machine learning, this new approach to AI is informed by neurological insights about how the human brain functions and the way that neurons connect with one another.

Deep learning systems are formed of artificial neural networks that exist on multiple layers (hence the word ‘deep’), with each layer given the task of making sense of a different pattern in images, sounds or texts. The first layer may detect rudimentary patterns, for example the outline of an object, whereas the next layer may identify a band of colours. And the process is repeated across all the layers and across all the data until the system can cluster the various patterns to create distinct categories of, say, objects or words.
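Here is a minimal sketch of that layered structure (illustrative sizes and random, untrained weights; a real network would learn its weights from data):

    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [784, 128, 64, 10]    # e.g. raw pixels -> low-level patterns -> parts -> categories
    weights = [rng.normal(scale=0.1, size=(m, k))
               for m, k in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        activation = x
        for w in weights:
            # Each layer re-describes the previous layer's output as a new set of features.
            activation = np.maximum(0.0, activation @ w)
        return activation               # scores over the final categories

    image = rng.random(784)             # stand-in for a flattened input image
    print(forward(image).shape)         # (10,)

Each pass through a layer is the "making sense of a different pattern" step described above; training adjusts the weights so that the later layers' categories line up with objects, sounds or words.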

Deep learning is particularly impressive because, unlike the conventional machine learning approach, it can often proceed without humans ever having defined the categories in advance, whether they be objects, sounds or phrases. The distinction here is between supervised and unsupervised learning, and the latter is showing evermore impressive results. According to a King’s College London study, deep learning techniques more than doubled the accuracy of brain age assessments when using raw data from MRI scans.
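The supervised/unsupervised distinction can be shown in a few lines (toy two-dimensional data; scikit-learn is used purely for illustration):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=0.0, size=(50, 2)),
                   rng.normal(loc=4.0, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)   # labels exist, but only the supervised model is shown them

    supervised = LogisticRegression().fit(X, y)            # learns categories defined in advance
    unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # discovers groupings on its own

    print(supervised.predict(X[:3]), unsupervised.labels_[:3])

In the unsupervised case the algorithm is never told what the groups are; it has to find them in the structure of the data itself.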

The blog post is here.

Tuesday, April 5, 2016

The momentous advance in artificial intelligence demands a new set of ethics

Jason Millar
The Guardian
March 12, 2016

Here is an excerpt:

AI is also increasingly able to manage complex, data-intensive tasks, such as monitoring credit card systems for fraudulent behaviour, high-frequency stock trading and detecting cyber security threats. Embodied as robots, deep-learning AI is poised to begin to move and work among us – in the form of service, transportation, medical and military robots.

Deep learning represents a paradigm shift in the relationship humans have with their technological creations. It results in AI that displays genuinely surprising and unpredictable behaviour. Commenting after his first loss, Lee described being stunned by an unconventional move he claimed no human would ever have made. Demis Hassabis, one of DeepMind’s founders, echoed the sentiment: “We’re very pleased that AlphaGo played some quite surprising and beautiful moves.”

Alan Turing, the visionary computer scientist, predicted we would someday speak of machines that think. He never predicted this.

The article is here.