Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, October 13, 2017

Automation on our own terms

Benedict Dellot and Fabian Wallace-Stephens
Medium.com
Originally published September 17, 2017

Here is an excerpt:

There are three main risks of embracing AI and robotics unreservedly:
  1. A rise in economic inequality — To the extent that technology deskills jobs, it will put downward pressure on earnings. If jobs are removed altogether as a result of automation, the result will be greater returns for those who make and deploy the technology, as well as the elite workers left behind in firms. The median OECD country has already seen a decrease in its labour share of income of about 5 percentage points since the early 1990s, with capital’s share swallowing the difference. Another risk here is market concentration. If large firms continue to adopt AI and robotics at a faster rate than small firms, they will gain enormous efficiency advantages and as a result could capture an excessive share of their markets. Automation could lead to oligopolistic markets, where a handful of firms dominate at the expense of others.
  2. A deepening of geographic disparities — Since the computer revolution of the 1980s, cities that specialise in cognitive work have gained a comparative advantage in job creation. In 2014, 5.5 percent of all UK workers operated in new job types that emerged after 1990, but the figure for workers in London was almost double that at 9.8 percent. The ability of cities to attract skilled workers, as well as the diverse nature of their economies, makes them better placed than rural areas to grasp the opportunities of AI and robotics. The most vulnerable locations will be those that are heavily reliant on a single automatable industry, such as parts of the North East that have a large stock of call centre jobs.
  3. An entrenchment of demographic biases — If left untamed, automation could disadvantage some demographic groups. Recall our case study analysis of the retail sector, which suggested that AI and robotics might lead to fewer workers being required in bricks-and-mortar shops, but more workers being deployed in warehouse operative roles. Given that women are more likely to hold the former roles and men the latter, automation in this case could exacerbate gender pay and job differences. It is also possible that the use of AI in recruitment (e.g. algorithms that screen CVs) could amplify workplace biases and block people from employment based on their age, ethnicity or gender, as the sketch after this list illustrates.
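
The recruitment point in (3) is easy to make concrete. The sketch below is purely illustrative (synthetic data, hypothetical feature names, a stock scikit-learn classifier) and is not drawn from the report; it simply shows how a screening model trained on biased historical hiring decisions reproduces that bias for otherwise identical candidates.

    # Illustrative toy example: a CV-screening model trained on biased
    # historical decisions reproduces that bias. All data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    experience = rng.normal(5, 2, n)        # years of experience
    group = rng.integers(0, 2, n)           # 0 = group A, 1 = group B
    # Historical "hired" labels depended on experience AND on group
    # membership, i.e. past decisions were biased against group B.
    hired = (experience - 1.5 * group + rng.normal(0, 1, n)) > 5

    X = np.column_stack([experience, group])
    screener = LogisticRegression().fit(X, hired)

    # The trained screener now scores otherwise identical candidates differently.
    p_group_a = screener.predict_proba([[6.0, 0]])[0, 1]
    p_group_b = screener.predict_proba([[6.0, 1]])[0, 1]
    print(f"P(shortlisted | group A) = {p_group_a:.2f}")
    print(f"P(shortlisted | group B) = {p_group_b:.2f}")

Nothing here "intends" to discriminate; the model simply fits whatever pattern sits in its training labels, which is the garbage-in, garbage-out point made in the next excerpt.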

Thursday, October 12, 2017

The Data Scientist Putting Ethics In AI

By Poornima Apte
The Daily Dose
Originally published September 25, 2017

Here is an excerpt:

Chowdhury’s other personal goal — to make AI accessible to everyone — is noble, but if the technology’s ramifications are not yet fully known, might it not also be dangerous? Doomsday scenarios — AI as the rapacious monster devouring all our jobs — put forward in the media may not be in our immediate futures, but Alexandra Whittington does worry that implicit human biases could make their way into the AI of the future — a problem that might be exacerbated if not accounted for early on, before any democratization of the tools occurs. Whittington is a futurist and foresight director at Fast Future. She points to a recent example of AI in law where the “robot-lawyer” was named Ross, and the legal assistant had a woman’s name, Cara. “You look at Siri and Cortana, they’re women, right?” Whittington says. “But they’re assistants, not the attorney or the accountant.” It’s the whole garbage-in, garbage-out theory, she says, cautioning against an overly idealistic approach toward the technology.

The article is here.

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
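
For readers who want the formal statement: in the 1999 formulation (Tishby, Pereira and Bialek), a representation T of the input X is chosen to discard as much of X as possible while preserving what is relevant to the output Y. Written as an objective over the encoding distribution, with I denoting mutual information:

    \min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)

The multiplier beta sets how many bits about X the network is willing to keep per bit of predictive information about Y; the "squeezing" described above corresponds to driving I(X; T) down during training while I(T; Y) stays high.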

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.
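
As a rough illustration of how a bottleneck term can become a training objective, the sketch below adds a KL penalty on a stochastic encoding to an ordinary classification loss. This is a generic variational-style approximation under an assumed Gaussian encoder, not Alemi's actual code, and the coefficient beta is an arbitrary placeholder.

    # Minimal sketch of a variational-style information-bottleneck loss.
    # Illustrative only; shapes and the value of beta are assumptions.
    import torch
    import torch.nn.functional as F

    def bottleneck_loss(logits, targets, mu, logvar, beta=1e-3):
        """Cross-entropy plus a KL term that limits how much the stochastic
        encoding z ~ N(mu, exp(logvar)) can retain about the input."""
        ce = F.cross_entropy(logits, targets)
        # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
        kl = -0.5 * torch.mean(
            torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
        return ce + beta * kl

Raising beta squeezes the representation harder at some cost to accuracy; lowering it toward zero recovers an ordinary classifier.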

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond the question of whether we want to hand over the wheel to software at all, there are deeper, more troubling questions that must be asked. The real questions we should be asking as we edge closer to completely autonomous roadways lie in ethically complex areas. Among these areas of concern, one very difficult question stands out: should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.
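
One way to see why this isn't simple: even the most naive harm-minimising rule forces someone to pick numbers that encode whose safety counts for how much. The toy policy below is entirely hypothetical (the outcomes, risk figures and weights are invented for illustration), not anything a manufacturer actually ships.

    # Toy illustration: any "minimise harm" rule hides moral judgements
    # in its weights. All values here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        label: str
        passenger_risk: float   # probability of serious harm to the passenger
        pedestrian_risk: float  # probability of serious harm to a pedestrian

    def choose(outcomes, passenger_weight=1.0, pedestrian_weight=1.0):
        """Pick the manoeuvre with the lowest weighted expected harm."""
        return min(outcomes,
                   key=lambda o: passenger_weight * o.passenger_risk
                                 + pedestrian_weight * o.pedestrian_risk)

    options = [
        Outcome("brake hard", passenger_risk=0.30, pedestrian_risk=0.10),
        Outcome("swerve",     passenger_risk=0.05, pedestrian_risk=0.40),
    ]
    print(choose(options).label)                        # -> brake hard
    print(choose(options, passenger_weight=3.0).label)  # -> swerve

Change one weight and the "right" manoeuvre flips, which is the article's point: the ethics ends up living in parameters that someone has to choose and defend.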

The article is here.

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from MIT Media Lab, Toulouse School of Economics, and the University of California, Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must put either their own passenger or a pedestrian at risk. This is a real-world version of an ethical dilemma called “The Trolley Problem.”

The article is here.

Tuesday, October 10, 2017

How AI & robotics are transforming social care, retail and the logistics industry

Benedict Dellot and Fabian Wallace-Stephens
RSA.org
Originally published September 18, 2017

Here is an excerpt:

The CHIRON project

CHIRON is a two-year project funded by Innovate UK. It strives to design care robotics for the future with a focus on dignity, independence and choice. CHIRON is a set of intelligent modular robotic systems, located in multiple positions around the home. Among its intended uses are helping people with personal hygiene tasks in the morning, getting them ready for the day, and supporting them in preparing meals in the kitchen. CHIRON’s various components can be mixed and matched to enable the customer to undertake a wide range of domestic and self-care tasks independently, or to enable a care worker to assist an increased number of customers.

The vision for CHIRON is to move from an ‘end of life’ institutional model, widely regarded as unsustainable and not fit for purpose, to a more dynamic and flexible market that offers people greater choice in the care sector when they require it.

The CHIRON project is being managed by a consortium led by Designability. The key technology partners are Bristol Robotics Laboratory and Shadow Robot Company, which have considerable expertise in conducting pioneering research and development in robotics. The award-winning social enterprise care provider Three Sisters Care will bring user-centred design to the core of the project. The Smart Homes & Buildings Association will work to introduce the range of devices that will constitute CHIRON and make it a valuable presence in people’s homes.

The article is here.

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, October 9, 2017

Artificial Human Embryos Are Coming, and No One Knows How to Handle Them

Antonio Regalado
MIT Tech Review
September 19, 2017

Here is an excerpt:

Scientists at Michigan now have plans to manufacture embryoids by the hundreds. These could be used to screen drugs to see which cause birth defects, to find drugs that increase the chance of pregnancy, or to create starting material for lab-generated organs. But ethical and political quarrels may not be far behind. “This is a hot new frontier in both science and bioethics. And it seems likely to remain contested for the coming years,” says Jonathan Kimmelman, a member of the bioethics unit at McGill University, in Montreal, and a leader of an international organization of stem-cell scientists.

What’s really growing in the dish? There is no easy answer to that. In fact, no one is even sure what to call these new entities. In March, a team from Harvard University offered the catch-all “synthetic human entities with embryo-like features,” or SHEEFs, in a paper cautioning that “many new varieties” are on the horizon, including realistic mini-brains.

Shao, who is continuing his training at MIT, dug into the ethics question and came to his own conclusions. “Very early on in our research we started to pay attention to why are we doing this? Is it really necessary? We decided yes, we are trying to grow a structure similar to part of the human early embryo that is hard otherwise to study,” says Shao. “But we are not going to generate a complete human embryo. I can’t just consider my feelings. I have to think about society.”

The article is here.

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.