Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, October 10, 2017

How AI & robotics are transforming social care, retail and the logistics industry

Benedict Dellot and Fabian Wallace-Stephens
RSA.org
Originally published September 18, 2017

Here is an excerpt:

The CHIRON project

CHIRON is a two-year project funded by Innovate UK. It aims to design care robotics for the future with a focus on dignity, independence and choice. CHIRON is a set of intelligent modular robotic systems, located in multiple positions around the home. Among its intended uses are helping people with personal hygiene tasks in the morning, helping them get ready for the day, and supporting them in preparing meals in the kitchen. CHIRON’s various components can be mixed and matched to enable the customer to undertake a wide range of domestic and self-care tasks independently, or to enable a care worker to assist an increased number of customers.

The vision for CHIRON is to move from an ‘end of life’ institutional model, widely regarded as unsustainable and not fit for purpose, to a more dynamic and flexible market that offers people greater choice in the care sector when they require it.

The CHIRON project is being managed by a consortium led by Designability. The key technology partners are Bristol Robotics Laboratory and Shadow Robot Company, who have considerable expertise in conducting pioneering research and development in robotics. The award-winning social enterprise care provider Three Sisters Care will bring user-centred design to the core of the project. The Smart Homes & Buildings Association will work to introduce the range of devices that will make up CHIRON and make it a valuable presence in people’s homes.

The article is here.

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, October 9, 2017

Artificial Human Embryos Are Coming, and No One Knows How to Handle Them

Antonio Regalado
MIT Tech Review
September 19, 2017

Here is an excerpt:

Scientists at Michigan now have plans to manufacture embryoids by the hundreds. These could be used to screen drugs to see which cause birth defects, find others to increase the chance of pregnancy, or to create starting material for lab-generated organs. But ethical and political quarrels may not be far behind. “This is a hot new frontier in both science and bioethics. And it seems likely to remain contested for the coming years,” says Jonathan Kimmelman, a member of the bioethics unit at McGill University, in Montreal, and a leader of an international organization of stem-cell scientists.

What’s really growing in the dish? There’s no easy answer to that. In fact, no one is even sure what to call these new entities. In March, a team from Harvard University offered the catch-all “synthetic human entities with embryo-like features,” or SHEEFS, in a paper cautioning that “many new varieties” are on the horizon, including realistic mini-brains.

Shao, who is continuing his training at MIT, dug into the ethics question and came to his own conclusions. “Very early on in our research we started to pay attention to why are we doing this? Is it really necessary? We decided yes, we are trying to grow a structure similar to part of the human early embryo that is hard otherwise to study,” says Shao. “But we are not going to generate a complete human embryo. I can’t just consider my feelings. I have to think about society.”

The article is here.

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review process depends to a large extent on the trust and willing participation of the scholarly community and requires that everyone involved behaves responsibly and ethically. Peer reviewers play a central and critical part in the peer review process, but may come to the role without any guidance and be unaware of their ethical obligations. Journals have an obligation to provide transparent policies for peer review, and reviewers have an obligation to conduct reviews in an ethical and accountable manner. Clear communication between the journal and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard cases from its members related to peer review issues and bases these guidelines, in part, on the collective experience and wisdom of the COPE Forum participants. It is hoped they will provide helpful guidance to researchers, be a reference for editors and publishers in guiding their reviewers, and act as an educational resource for institutions in training their students and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript submissions to journals, but can also include reviews for other platforms and apply to public commenting that can occur pre- or post-publication. Reviews of other materials such as preprints, grants, books, conference proceeding submissions, registered reports (preregistered protocols), or data will have a similar underlying ethical framework, but the process will vary depending on the source material and the type of review requested. The model of peer review will also influence elements of the process.

The guidelines are here.

Trump Administration Rolls Back Birth Control Mandate

Robert Pear, Rebecca R. Ruiz, and Laurie Goodstein
The New York Times
Originally published October 6, 2017

The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people.

The twin actions, by the Department of Health and Human Services and the Justice Department, were meant to carry out a promise issued by President Trump five months ago, when he declared in the Rose Garden that “we will not allow people of faith to be targeted, bullied or silenced anymore.”

Attorney General Jeff Sessions quoted those words in issuing guidance to federal agencies and prosecutors, instructing them to take the position in court that workers, employers and organizations may claim broad exemptions from nondiscrimination laws on the basis of religious objections.

At the same time, the Department of Health and Human Services issued two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions.

More than 55 million women have access to birth control without co-payments because of the contraceptive coverage mandate, according to a study commissioned by the Obama administration. Under the new regulations, hundreds of thousands of women could lose those benefits.

The article is here.

And, just when the abortion rate was at pre-1973 levels.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Lawsuit Over a Suicide Points to a Risk of Antidepressants

Roni Caryn Rabin
The New York Times
Originally published September 11, 2017

Here is an excerpt:

The case is a rare instance in which a lawsuit over a suicide involving antidepressants actually went to trial; many such cases are either dismissed or settled out of court, said Brent Wisner, of the law firm Baum Hedlund Aristei Goldman, which represented Ms. Dolin.

The verdict is also unusual because Glaxo, which has asked the court to overturn the verdict or to grant a new trial, no longer sells Paxil in the United States and did not manufacture the generic form of the medication Mr. Dolin was taking. The company argues that it should not be held liable for a pill it did not make.

Concerns about safety have long dogged antidepressants, though many doctors and patients consider the medications lifesavers.

Ever since they were linked to an increase in suicidal behaviors in young people more than a decade ago, all antidepressants, including Paxil, have carried a “black box” warning label, reviewed and approved by the Food and Drug Administration, saying that they increase the risk of suicidal thinking and behavior in children, teens and young adults under age 25.

The warning labels also stipulate that the suicide risk has not been seen in short-term studies in anyone over age 24, but urge close monitoring of all patients initiating drug treatment.

The article is here.