Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, December 31, 2016

The Wright Show: Against Empathy

Robert Wright interviews Paul Bloom on his book "Against Empathy."
The Wright Show
Originally published December 6, 2016


Friday, December 30, 2016

Programmers are having a huge discussion about the unethical and illegal things they’ve been asked to do

Julie Bort
Business Insider
Originally published November 20, 2016

Here is an excerpt:

He pointed out that "there are hints" that developers will increasingly face some real heat in the years to come. He cited Volkswagen America's CEO, Michael Horn, who, during a Congressional hearing on the company's emissions-cheating scandal, initially blamed software engineers, claiming the coders had acted on their own "for whatever reason." Horn later resigned after US prosecutors accused the company of making the decision at the highest levels and then trying to cover it up.

But Martin pointed out, "The weird thing is, it was software developers who wrote that code. It was us. Some programmers wrote cheating code. Do you think they knew? I think they probably knew."

Martin finished with a fire-and-brimstone call to action in which he warned that one day, some software developer will do something that will cause a disaster that kills tens of thousands of people.

But Sourour points out that it's not just about accidentally killing people or deliberately polluting the air. Software has already been used by Wall Street firms to manipulate stock quotes.

The article is here.

The ethics of algorithms: Mapping the debate

Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi
Big Data and Society
DOI: 10.1177/2053951716679679, Dec 2016

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Thursday, December 29, 2016

The Tragedy of Biomedical Moral Enhancement

Stefan Schlag
Neuroethics (2016), pp. 1-13.
doi:10.1007/s12152-016-9284-5

Abstract

In Unfit for the Future, Ingmar Persson and Julian Savulescu present a challenging argument in favour of biomedical moral enhancement. In light of the existential threats of climate change, insufficient moral capacities of the human species seem to require a cautiously shaped programme of biomedical moral enhancement. The story of the tragedy of the commons creates the impression that climate catastrophe is unavoidable and consequently gives strength to the argument. The present paper analyses to what extent a policy in favour of biomedical moral enhancement can thereby be justified and puts special emphasis on the political context. By reconstructing the theoretical assumptions of the argument and taking them seriously, it is revealed that the argument is self-defeating. The tragedy of the commons may make moral enhancement appear necessary, but when it comes to its implementation, a second-order collective action problem emerges and impedes the execution of the idea. The paper examines several modifications of the argument and shows how it can be based on the easier enforceability of BME. While this implies enforcement, that is not an obstacle to the justification of BME. Rather, enforceability might be the decisive advantage of BME over other means. To take account of the global character of climate change, the paper closes with an inquiry into possible justifications of enforced BME on a global level. The upshot of the entire line of argumentation is that Unfit for the Future cannot justify BME because it ignores the nature of the problem of climate protection and the political prerequisites of any solution.

The article is here.

The True Self: A psychological concept distinct from the self.

Strohminger, N., Newman, G., and Knobe, J. (in press).
Perspectives on Psychological Science.

A long tradition of psychological research has explored the distinction between characteristics that are part of the self and those that lie outside of it. Recently, a surge of research has begun examining a further distinction. Even among characteristics that are internal to the self, people pick out a subset as belonging to the true self. These factors are judged as making people who they really are, deep down. In this paper, we introduce the concept of the true self and identify features that distinguish people’s understanding of the true self from their understanding of the self more generally. In particular, we consider recent findings that the true self is perceived as positive and moral, and that this tendency is actor-observer invariant and cross-culturally stable. We then explore possible explanations for these findings and discuss their implications for a variety of issues in psychology.

The paper is here.

Wednesday, December 28, 2016

Oxytocin modulates third-party sanctioning of selfish and generous behavior within and between groups

Katie Daughters, Antony S.R. Manstead, Femke S. Ten Velden, Carsten K.W. De Dreu
Psychoneuroendocrinology, Available online 3 December 2016

Abstract

Human groups function because members trust each other and reciprocate cooperative contributions, and reward others’ cooperation and punish their non-cooperation. Here we examined the possibility that such third-party punishment and reward of others’ trust and reciprocation is modulated by oxytocin, a neuropeptide generally involved in social bonding and in-group (but not out-group) serving behavior. Healthy males and females (N = 100) self-administered a placebo or 24 IU of oxytocin in a randomized, double-blind, between-subjects design. Participants were asked to indicate (incentivized, costly) their level of reward or punishment for in-group (outgroup) investors donating generously or fairly to in-group (outgroup) trustees, who back-transferred generously, fairly or selfishly. Punishment (reward) was higher for selfish (generous) investments and back-transfers when (i) investors were in-group rather than outgroup, and (ii) trustees were in-group rather than outgroup, especially when (iii) participants received oxytocin rather than placebo. It follows, first, that oxytocin leads individuals to ignore out-groups as long as out-group behavior is not relevant to the in-group and, second, that oxytocin contributes to creating and enforcing in-group norms of cooperation and trust.

The article is here.

Inference of trustworthiness from intuitive moral judgments

Everett, J. A., Pizarro, D. A., and Crockett, M. J.
Journal of Experimental Psychology: General, Vol 145(6), Jun 2016, 772-787.

Moral judgments play a critical role in motivating and enforcing human cooperation, and research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here, we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games. These findings provide empirical support for a partner choice account of moral intuitions whereby typically deontological judgments confer an adaptive function by increasing a person's likelihood of being chosen as a cooperation partner. Therefore, deontological moral intuitions may represent an evolutionarily prescribed prior that was selected for through partner choice mechanisms.

The article is here.

Tuesday, December 27, 2016

Is Addiction a Brain Disease?

Kent C. Berridge
Neuroethics (2016), pp. 1-5.
doi:10.1007/s12152-016-9286-3

Abstract

Where does normal brain or psychological function end, and pathology begin? The line can be hard to discern, making disease sometimes a tricky word. In addiction, normal ‘wanting’ processes become distorted and excessive, according to the incentive-sensitization theory. Excessive ‘wanting’ results from drug-induced neural sensitization changes in underlying brain mesolimbic systems of incentive. ‘Brain disease’ was never used by the theory, but neural sensitization changes are arguably extreme enough and problematic enough to be called pathological. This implies that ‘brain disease’ can be a legitimate description of addiction, though caveats are needed to acknowledge roles for choice and active agency by the addict. Finally, arguments over ‘brain disease’ should be put behind us. Our real challenge is to understand addiction and devise better ways to help. Arguments over descriptive words only distract from that challenge.

The article is here.

Artificial moral agents: creative, autonomous and social. An approach based on evolutionary computation

Ioan Muntean and Don Howard
Frontiers in Artificial Intelligence and Applications
Volume 273: Sociable Robots and the Future of Social Relations

Abstract

In this paper we propose a model of artificial normative agency that accommodates some of the social competencies we expect from artificial moral agents. The artificial moral agent (AMA) discussed here is based on two components: (i) a version of virtue ethics for human agents (VE) adapted to artificial agents, called here “virtual virtue ethics” (VVE); and (ii) an implementation based on evolutionary computation (EC), more concretely genetic algorithms. The reasons to choose VVE and EC are related to two elements that are, we argue, central to any approach to artificial morality: autonomy and creativity. The greater the autonomy an artificial agent has, the more it needs moral standards. In virtue ethics, each agent builds her own character over time; creativity comes in degrees as the individual becomes morally competent. The model of an autonomous and creative AMA thus implemented is called GAMA: Genetic(-inspired) Autonomous Moral Agent. First, unlike the majority of other implementations of machine ethics, our model is agent-centered rather than action-centered; it emphasizes the developmental and behavioral aspects of the ethical agent. Second, in our model the AMA does not make decisions exclusively and directly by following rules or by calculating the best outcome of an action. The model incorporates rules as initial data (as the initial population of the genetic algorithms) or as correction factors, but not as the main structure of the algorithm. Third, our computational model is less conventional, or at least it does not fall within the Turing tradition in computation. Genetic algorithms are excellent search tools that avoid local minima and generate solutions based on previous results. In the GAMA model, still prospective at this stage, the VVE approach to ethics is better implemented by EC. Finally, GAMA agents can display sociability through competition among the best moral actions and the desire to win the competition. Both VVE and EC are better suited to a “social approach” to AMA than the standard approaches, making GAMA a more promising “moral and social artificial agent.”
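
To make the evolutionary-computation idea in the abstract concrete, here is a minimal, purely illustrative genetic-algorithm loop in Python. It is not the authors' GAMA implementation; the genome encoding, the placeholder fitness function, and all parameters are assumptions chosen only to show how candidate "moral dispositions" could be varied and selected over generations rather than derived from fixed rules.

```python
# Minimal sketch of a genetic-algorithm loop of the kind the GAMA proposal
# describes. All names, the fitness function, and the parameters are
# illustrative assumptions, not the authors' implementation.
import random

GENOME_LEN = 10        # a "disposition" is a vector of 10 trait weights in [0, 1]
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.05


def fitness(genome):
    """Placeholder moral-competence score; a real model would evaluate the
    agent's behaviour in simulated social scenarios."""
    return sum(genome)  # stand-in: reward stronger trait weights


def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]


def mutate(genome):
    # With small probability, perturb a gene and clamp it to [0, 1].
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else g
            for g in genome]


def evolve():
    # The abstract suggests rules could seed the initial population
    # ("rules as initial data"); here the seed is simply random.
    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]          # selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best disposition:", [round(g, 2) for g in best])
```

The design point the sketch illustrates is the one the abstract emphasizes: rules and outcomes enter only as seed data or as part of the fitness evaluation, while the main structure is variation and selection over whole agent dispositions.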

The article is here.