Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, October 12, 2017

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”
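For readers who want the formal statement behind the excerpt, the information bottleneck of Tishby, Pereira, and Bialek (1999) frames learning as finding a compressed representation T of the input X that remains predictive of the label Y. A minimal sketch of the objective (the notation here is ours, not the article's):

\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)

Here I(\cdot\,;\cdot) denotes mutual information and \beta \ge 0 sets the trade-off: a small \beta favours aggressive compression of X, while a large \beta favours retaining everything relevant to predicting Y. Tishby's claim, roughly, is that the layers of a trained deep network end up near the optimal frontier of this trade-off.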

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond the question of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked. The real questions, as we edge closer to completely autonomous roadways, lie in ethically complex areas. Among these areas of concern, one very difficult question stands out: should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.

The article is here.

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from the MIT Media Lab, the Toulouse School of Economics, and the University of California at Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must put either their own passenger or a pedestrian at risk. This is a real-world version of an ethical dilemma called “the trolley problem.”

The article is here.

Tuesday, October 10, 2017

How AI & robotics are transforming social care, retail and the logistics industry

Benedict Dellot and Fabian Wallace-Stephens
RSA.org
Originally published September 18, 2017

Here is an excerpt:

The CHIRON project

CHIRON is a two-year project funded by Innovate UK. It strives to design care robotics for the future with a focus on dignity, independence and choice. CHIRON is a set of intelligent modular robotic systems, located in multiple positions around the home. Among its intended uses are helping people with personal hygiene tasks in the morning, getting them ready for the day, and supporting them in preparing meals in the kitchen. CHIRON’s various components can be mixed and matched to enable the customer to undertake a wide range of domestic and self-care tasks independently, or to enable a care worker to assist an increased number of customers.

The vision for CHIRON is to move from an ‘end of life’ institutional model, widely regarded as unsustainable and not fit for purpose, to a more dynamic and flexible market that offers people greater choice in the care sector when they require it.

The CHIRON project is being managed by a consortium led by Designability. The key technology partners are Bristol Robotics Laboratory and Shadow Robot Company, who have considerable expertise in conducting pioneering research and development in robotics. Award-winning social enterprise care provider Three Sisters Care will bring user-centred design to the core of the project. The Smart Homes & Buildings Association will work to introduce the range of devices that will constitute CHIRON and make it a valuable presence in people’s homes.

The article is here.

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, October 9, 2017

Artificial Human Embryos Are Coming, and No One Knows How to Handle Them

Antonio Regalado
MIT Tech Review
September 19, 2017

Here is an excerpt:

Scientists at Michigan now have plans to manufacture embryoids by the hundreds. These could be used to screen drugs to see which cause birth defects, to find others that increase the chance of pregnancy, or to create starting material for lab-generated organs. But ethical and political quarrels may not be far behind. “This is a hot new frontier in both science and bioethics. And it seems likely to remain contested for the coming years,” says Jonathan Kimmelman, a member of the bioethics unit at McGill University, in Montreal, and a leader of an international organization of stem-cell scientists.

What’s really growing in the dish? There is no easy answer to that. In fact, no one is even sure what to call these new entities. In March, a team from Harvard University offered the catch-all “synthetic human entities with embryo-like features,” or SHEEFS, in a paper cautioning that “many new varieties” are on the horizon, including realistic mini-brains.

Shao, who is continuing his training at MIT, dug into the ethics question and came to his own conclusions. “Very early on in our research we started to pay attention to why are we doing this? Is it really necessary? We decided yes, we are trying to grow a structure similar to part of the human early embryo that is hard otherwise to study,” says Shao. “But we are not going to generate a complete human embryo. I can’t just consider my feelings. I have to think about society.”

The article is here.

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review
process depends to a large extent on the trust and willing participation of the scholarly
community and requires that everyone involved behaves responsibly and ethically. Peer
reviewers play a central and critical part in the peer review process, but may come to the role
without any guidance and be unaware of their ethical obligations. Journals have an obligation
to provide transparent policies for peer review, and reviewers have an obligation to conduct
reviews in an ethical and accountable manner. Clear communication between the journal
and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard
cases from its members related to peer review issues and bases these guidelines, in part, on
the collective experience and wisdom of the COPE Forum participants. It is hoped they will
provide helpful guidance to researchers, be a reference for editors and publishers in guiding
their reviewers, and act as an educational resource for institutions in training their students
and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript
submissions to journals, but can also include reviews for other platforms and apply to public
commenting that can occur pre- or post-publication. Reviews of other materials such as
preprints, grants, books, conference proceeding submissions, registered reports (preregistered
protocols), or data will have a similar underlying ethical framework, but the process
will vary depending on the source material and the type of review requested. The model of
peer review will also influence elements of the process.

The guidelines are here.