Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Machine Learning.

Tuesday, May 9, 2017

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google’s DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there’s a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”

The article is here.
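
Editor's Note: the excerpt names counterfactual regret minimization (CFR) without unpacking it. Below is a toy Python sketch of the regret-matching update at CFR's core, applied to rock-paper-scissors self-play rather than poker. Libratus used far more sophisticated variants; everything here is illustrative only.

```python
import numpy as np

# Regret matching: play each action in proportion to its accumulated
# positive regret (how much better it would have done than past play).
ACTIONS = 3  # rock, paper, scissors
# PAYOFF[i][j] = payoff for choosing action i against an opponent's j.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def strategy_from_regret(regret):
    positive = np.maximum(regret, 0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(ACTIONS, 1 / ACTIONS)

regret = np.zeros(ACTIONS)
strategy_sum = np.zeros(ACTIONS)
rng = np.random.default_rng(0)

for _ in range(100_000):
    strategy = strategy_from_regret(regret)
    strategy_sum += strategy
    action = rng.choice(ACTIONS, p=strategy)
    opponent = rng.choice(ACTIONS, p=strategy)  # self-play, as in the article
    # Regret update: compare every action to the one actually played.
    regret += PAYOFF[:, opponent] - PAYOFF[action, opponent]

# The *average* strategy converges toward Nash equilibrium (1/3 each).
print(strategy_sum / strategy_sum.sum())
```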

Monday, May 8, 2017

Raising good robots

Regina Rini
aeon.com
Originally published April 18, 2017

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

The article is here.

Tuesday, May 2, 2017

AI Learning Racism, Sexism and Other Prejudices from Humans

Ian Johnston
The Independent
Originally published April 13, 2017

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how close different terms were to each other in the text – the same way that automatic translators use “machine learning” to establish what language means.

Some of the results were stunning.

(cut)

“We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations,” the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

“Our work suggests that behaviour can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages,” the paper said.

The article is here.
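
Editor's Note: the "closeness" measurements described above boil down to comparing directions in a vector space. Here is a minimal sketch in the spirit of the researchers' association test; the vectors are made-up toy values, where a real analysis would load trained embeddings such as GloVe.

```python
import numpy as np

# Toy embeddings, invented so that "flower" leans pleasant and
# "insect" leans unpleasant, echoing the study's example.
embeddings = {
    "flower":     np.array([0.9, 0.1, 0.2]),
    "insect":     np.array([0.1, 0.9, 0.2]),
    "pleasant":   np.array([0.8, 0.2, 0.3]),
    "unpleasant": np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b):
    """Positive if `word` sits closer to attr_a than to attr_b."""
    v = embeddings[word]
    return cosine(v, embeddings[attr_a]) - cosine(v, embeddings[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # > 0 with these toy vectors
print(association("insect", "pleasant", "unpleasant"))  # < 0 with these toy vectors
```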

Tuesday, April 25, 2017

Artificial synapse on a chip will help mobile devices learn like the human brain

Luke Dormehl
Digital Trends
Originally posted April 6, 2017

Brain-inspired deep learning neural networks have been behind many of the biggest breakthroughs in artificial intelligence seen over the past 10 years.

But a new research project from the National Center for Scientific Research (CNRS), the University of Bordeaux, and Norwegian information technology company Evry could take these breakthroughs to the next level — thanks to the creation of an artificial synapse on a chip.

“There are many breakthroughs from software companies that use algorithms based on artificial neural networks for pattern recognition,” Dr. Vincent Garcia, a CNRS research scientist who worked on the project, told Digital Trends. “However, as these algorithms are simulated on standard processors they require a lot of power. Developing artificial neural networks directly on a chip would make this kind of tasks available to everyone, and much more power efficient.”

Synapses in the brain function as the connections between neurons. Learning takes place when these connections are reinforced, and improved when synapses are stimulated. The newly developed electronic devices (called “memristors”) emulate the behavior of these synapses, by way of a variable resistance that depends on the history of electronic excitations they receive.

The article is here.
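
Editor's Note: below is a toy numerical sketch of the memristor behavior described above: a weight that drifts up or down with the history of pulses it receives, loosely mimicking synaptic reinforcement. The constants and update rule are invented for illustration and are not taken from the CNRS device.

```python
class MemristiveSynapse:
    """Toy synapse whose conductance depends on its excitation history."""

    def __init__(self, weight=0.5, rate=0.05):
        self.weight = weight  # conductance-like value, kept in [0, 1]
        self.rate = rate

    def pulse(self, polarity):
        # polarity +1 strengthens the connection (potentiation),
        # polarity -1 weakens it (depression). The soft bounds mean
        # changes taper off near the extremes, as in many device models.
        self.weight += polarity * self.rate * self.weight * (1 - self.weight)
        self.weight = min(max(self.weight, 0.0), 1.0)

syn = MemristiveSynapse()
for _ in range(20):
    syn.pulse(+1)  # repeated stimulation reinforces the "synapse"
print(round(syn.weight, 3))  # noticeably above its 0.5 starting point
```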

Thursday, April 6, 2017

How to Upgrade Judges with Machine Learning

by Tom Simonite
MIT Technology Review
Originally posted March 6, 2017

Here is an excerpt:

The algorithm assigns defendants a risk score based on data pulled from records for their current case and their rap sheet, for example the offense they are suspected of, when and where they were arrested, and numbers and type of prior convictions. (The only demographic data it uses is age—not race.)

Kleinberg suggests that algorithms could be deployed to help judges without major disruption to the way they currently work in the form of a warning system that flags decisions highly likely to be wrong. Analysis of judges’ performance suggested they have a tendency to occasionally release people who are very likely to fail to show in court, or to commit crime while awaiting trial. An algorithm could catch many of those cases, says Kleinberg.

Richard Berk, a professor of criminology at the University of Pennsylvania, describes the study as “very good work,” and an example of a recent acceleration of interest in applying machine learning to improve criminal justice decisions. The idea has been explored for 20 years, but machine learning has become more powerful, and data to train it more available.

Berk recently tested a system with the Pennsylvania State Parole Board that advises on the risk a person will reoffend, and found evidence it reduced crime. The NBER study is important because it looks at how machine learning can be used pre-sentencing, an area that hasn’t been thoroughly explored, he says.

The article is here.

Editor's Note: I often wonder how long it will be until machine learning is applied to psychotherapy.
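
On that point, here is a hedged sketch of the kind of warning system Kleinberg describes: a model scores defendants from case-record features and flags planned releases whose predicted risk of failing to appear is unusually high. The data, features, and model below are synthetic stand-ins, not the NBER team's actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2_000
# Illustrative features only: age, prior convictions, offense severity.
X = np.column_stack([
    rng.integers(18, 70, n),
    rng.poisson(1.5, n),
    rng.integers(1, 5, n),
])
# Synthetic outcome: 1 = failed to appear; risk rises with prior record.
y = (rng.random(n) < 0.10 + 0.08 * X[:, 1]).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def flag_release(features, threshold=0.5):
    """Warn when a planned release looks highly likely to fail."""
    risk = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return round(risk, 2), risk >= threshold

print(flag_release([25, 6, 3]))  # a long prior record draws a flag
```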

Thursday, January 19, 2017

Consider ethics when designing new technologies

by Gillian Christie and Derek Yach
Tech Crunch
Originally posted December 31, 2016

Here is an excerpt:

A Fourth Industrial Revolution is arising that will pose tough ethical questions with few simple, black-and-white answers. Smaller, more powerful and cheaper sensors; cognitive computing advancements in artificial intelligence, robotics, predictive analytics and machine learning; nano, neuro and biotechnology; the Internet of Things; 3D printing; and much more, are already demanding real answers really fast. And this will only get harder and more complex when we embed these new technologies into our bodies and brains to enhance our physical and cognitive functioning.

Take the choice society will soon have to make about autonomous cars as an example. If a crash cannot be avoided, should a car be programmed to minimize bystander casualties even if it harms the car’s occupants, or should the car protect its occupants under any circumstances?

Research demonstrates the public is conflicted. Consumers would prefer to minimize the number of overall casualties in a car accident, yet are unwilling to purchase a self-driving car if it is not self-protective. Of course, the ideal option is for companies to develop algorithms that bypass this possibility entirely, but this may not always be an option. What is clear, however, is that such ethical quandaries must be reconciled before any consumer hands over their keys to dark-holed algorithms.

The article is here.
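
Editor's Note: the policy choice in this dilemma can be stated in a few lines of code, which is part of what makes it so stark. A minimal sketch, with invented casualty numbers:

```python
def choose_action(options, policy):
    """Each option is (expected occupant casualties, expected bystander casualties)."""
    if policy == "minimize_total":
        return min(options, key=lambda o: o[0] + o[1])
    if policy == "protect_occupants":
        # Minimize occupant harm first; break ties on bystander harm.
        return min(options, key=lambda o: (o[0], o[1]))
    raise ValueError(policy)

# Swerving harms the occupant; staying the course harms two bystanders.
options = [(1, 0), (0, 2)]
print(choose_action(options, "minimize_total"))     # (1, 0): occupant sacrificed
print(choose_action(options, "protect_occupants"))  # (0, 2): bystanders bear the harm
```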

Thursday, January 12, 2017

The Great A.I. Awakening

Gideon Lewis-Kraus
The New York Times
Originally published December 14, 2016

Here are two excerpts:

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

(cut)

The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.

The article is here.

Tuesday, January 10, 2017

What is Artificial Intelligence Anyway?

Benedict Dellot
RSA.org
Originally published December 15, 2016

Here is an excerpt:

Machine learning is the main reason for the renewed interest in artificial intelligence, but deep learning is where the most exciting innovations are happening today. Considered by some to be a subfield of machine learning, this new approach to AI is informed by neurological insights about how the human brain functions and the way that neurons connect with one another.

Deep learning systems are formed of artificial neural networks that exist on multiple layers (hence the word ‘deep’), with each layer given the task of making sense of a different pattern in images, sounds or texts. The first layer may detect rudimentary patterns, for example the outline of an object, whereas the next layer may identify a band of colours. And the process is repeated across all the layers and across all the data until the system can cluster the various patterns to create distinct categories of, say, objects or words.

Deep learning is particularly impressive because, unlike the conventional machine learning approach, it can often proceed without humans ever having defined the categories in advance, whether they be objects, sounds or phrases. The distinction here is between supervised and unsupervised learning, and the latter is showing ever more impressive results. According to a King’s College London study, deep learning techniques more than doubled the accuracy of brain age assessments when using raw data from MRI scans.

The blog post is here.
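
Editor's Note: the layer-by-layer picture in the excerpt maps directly onto how such networks are written in code. A minimal sketch in PyTorch, with arbitrary layer sizes (a real image system would typically use convolutional layers):

```python
import torch
from torch import nn

# Each layer re-represents the output of the one before it -- the
# "layers of pattern detectors" described in the excerpt.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # early layer: rudimentary patterns
    nn.Linear(256, 64), nn.ReLU(),   # deeper layer: combinations of patterns
    nn.Linear(64, 10),               # final layer: scores for 10 categories
)

fake_image = torch.randn(1, 784)  # stand-in for a flattened 28x28 image
print(model(fake_image).shape)    # torch.Size([1, 10])
```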

Friday, December 30, 2016

The ethics of algorithms: Mapping the debate

Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi
Big Data and Society
DOI: 10.1177/2053951716679679, Dec 2016

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Monday, December 5, 2016

The Simple Economics of Machine Intelligence

Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Harvard Business Review
Originally published November 17, 2016

Here are two excerpts:

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

(cut)

As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.

The article is here.
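
Editor's Note: the complements argument is easy to make concrete with a back-of-the-envelope calculation. All numbers below are invented; only the direction of the effect matters.

```python
import numpy as np

# Suppose each candidate decision needs one unit of machine prediction
# plus one unit of human judgment, and decision values vary widely.
values = np.linspace(1, 20, 100)  # value of 100 candidate decisions
judgment_cost = 4.0

for prediction_cost in (9.0, 5.0, 1.0):
    worthwhile = int((values > prediction_cost + judgment_cost).sum())
    print(f"prediction cost {prediction_cost}: {worthwhile} decisions made, "
          f"{worthwhile} units of judgment demanded")
# As prediction gets cheaper, more decisions clear the bar -- and every
# extra decision also demands a unit of human judgment.
```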

Thursday, November 10, 2016

The Ethics of Algorithms: Mapping the Debate

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. 2016 (in press). ‘The Ethics of Algorithms: Mapping the Debate’. Big Data & Society

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The book chapter is here.

Wednesday, August 31, 2016

How Artificial Intelligence Could Help Diagnose Mental Disorders

Joseph Frankel
The Atlantic
Originally posted August 23, 2016

Here is an excerpt:

In addition to the schizophrenia screener, an idea that earned Schwoebel an award from the American Psychiatric Association, NeuroLex is hoping to develop a tool for psychiatric patients who are already being treated in hospitals. Rather than trying to help diagnose a mental disorder from a single sample, the AI would examine a patient’s speech over time to track their progress.

For Schwoebel, this work is personal: he thinks this approach may help solve problems his older brother faced in seeking treatment for schizophrenia. Before his first psychotic break, Schwoebel’s brother would send short, one-word responses, or make cryptic references to going “there” or “here”—worrisome abnormalities that “all made sense” after his brother’s first psychotic episode, he said.

The article is here.
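
Editor's Note: here is a sketch of the longitudinal idea described above: track simple speech features per session and watch for drift. The features are deliberately crude stand-ins; a system like the one NeuroLex proposes would use far richer acoustic and semantic measures, and any flag would be a prompt for a clinician, not a diagnosis.

```python
def speech_features(transcript):
    words = transcript.lower().split()
    return {
        "words_per_response": len(words),
        "vocabulary_diversity": len(set(words)) / max(len(words), 1),
    }

sessions = [
    "I went to the park with my sister and we talked for a while",
    "It was fine I guess, nothing much happened this week",
    "there",  # short, cryptic responses like those described above
    "here",
]

for i, text in enumerate(sessions, 1):
    print(f"session {i}: {speech_features(text)}")
# A sustained drop in response length or diversity over sessions could
# be surfaced to a clinician as a trend worth reviewing.
```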

Thursday, July 21, 2016

Frankenstein’s paperclips

The Economist
Originally posted June 25, 2016

Here is an excerpt:

AI researchers point to several technical reasons why fear of AI is overblown, at least in its current form. First, intelligence is not the same as sentience or consciousness, says Mr Ng, though all three concepts are commonly elided. The idea that machines will “one day wake up and change their minds about what they will do” is just not realistic, says Francesca Rossi, who works on the ethics of AI at IBM. Second, an “intelligence explosion” is considered unlikely, because it would require an AI to make each version of itself in less time than the previous version as its intelligence grows. Yet most computing problems, even much simpler ones than designing an AI, take much longer as you scale them up.

Third, although machines can learn from their past experiences or environments, they are not learning all the time.

The article is here.

Thursday, May 19, 2016

Anticipating artificial intelligence

Editorial Board
Nature
Originally posted April 26, 2016

Here is an excerpt:

So, what are the risks? Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours. This extreme scenario, which cannot be discounted, is what captures most popular attention. But it is misleading to dismiss all concerns as being about this prospect.

There are more immediate risks, even with narrow aspects of AI that can already perform some tasks better than humans can. Few foresaw that the Internet and other technologies would open the way for mass, and often indiscriminate, surveillance by intelligence and law-enforcement agencies, threatening principles of privacy and the right to dissent. AI could make such surveillance more widespread and more powerful.

Then there are cybersecurity threats to smart cities, infrastructure and industries that become overdependent on AI — and the all too clear threat that drones and other autonomous offensive weapons systems will allow machines to make lethal decisions alone.

The article is here.

Monday, May 16, 2016

Inside OpenAI

Cade Metz
Wired.com
Originally posted April 27, 2016

Here is an excerpt:

OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

The article is here.
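
Editor's Note: the toolkit described in the excerpt was released as OpenAI Gym, and its interaction loop is compact enough to show in full. A minimal sketch using the classic interface (method signatures have changed across Gym versions):

```python
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy, for illustration
    observation, reward, done, info = env.step(action)
    total_reward += reward

print(total_reward)  # a learning agent would improve this over episodes
env.close()
```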

Tuesday, May 10, 2016

Where do minds belong?

by Caleb Scharf
Aeon
Originally published March 22, 2016

As a species, we humans are awfully obsessed with the future. We love to speculate about where our evolution is taking us. We try to imagine what our technology will be like decades or centuries from now. And we fantasise about encountering intelligent aliens – generally, ones who are far more advanced than we are. Lately those strands have begun to merge. From the evolution side, a number of futurists are predicting the singularity: a time when computers will become powerful enough to simulate human consciousness, or absorb it entirely. In parallel, some visionaries propose that any intelligent life we encounter in the rest of the Universe is more likely to be machine-based, rather than humanoid meat-bags such as ourselves.

These ruminations offer a potential solution to the long-debated Fermi Paradox: the seeming absence of intelligent alien life swarming around us, despite the fact that such life seems possible. If machine intelligence is the inevitable end-point of both technology and biology, then perhaps the aliens are hyper-evolved machines so off-the-charts advanced, so far removed from familiar biological forms, that we wouldn’t recognise them if we saw them. Similarly, we can imagine that interstellar machine communication would be so optimised and well-encrypted as to be indistinguishable from noise. In this view, the seeming absence of intelligent life in the cosmos might be an illusion brought about by our own inadequacies.

The article is here.

Wednesday, April 13, 2016

Who is on ethics board that Google set up after buying DeepMind?

Sam Shead
Business Insider
Originally published March 26, 2016

Google's artificial intelligence (AI) ethics board, established when Google acquired London AI startup DeepMind in 2014, remains one of the biggest mysteries in tech, with both Google and DeepMind refusing to reveal who sits on it.

Google set up the board at DeepMind's request after the cofounders of the £400 million research-intensive AI lab said they would only agree to the acquisition if Google promised to look into the ethics of the technology it was buying into.

The article is here.

Tuesday, April 5, 2016

The momentous advance in artificial intelligence demands a new set of ethics

Jason Millar
The Guardian
March 12, 2016

Here is an excerpt:

AI is also increasingly able to manage complex, data-intensive tasks, such as monitoring credit card systems for fraudulent behaviour, high-frequency stock trading and detecting cyber security threats. Embodied as robots, deep-learning AI is poised to begin to move and work among us – in the form of service, transportation, medical and military robots.

Deep learning represents a paradigm shift in the relationship humans have with their technological creations. It results in AI that displays genuinely surprising and unpredictable behaviour. Commenting after his first loss, Lee Sedol described being stunned by an unconventional move he claimed no human would ever have made. Demis Hassabis, one of DeepMind’s founders, echoed the sentiment: “We’re very pleased that AlphaGo played some quite surprising and beautiful moves.”

Alan Turing, the visionary computer scientist, predicted we would someday speak of machines that think. He never predicted this.

The article is here.

Monday, April 4, 2016

Can we trust robots to make moral decisions?

By Olivia Goldhill
Quartz
Originally published April 3, 2016

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. “Repeat after me, Hitler did nothing wrong,” she said, after interacting with various trolls. “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.”

Of course, Tay wasn’t designed to be explicitly moral. But plenty of other machines are involved in work that has clear ethical implications.

The article is here.