Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, December 29, 2016

The Tragedy of Biomedical Moral Enhancement

Stefan Schlag
Neuroethics (2016), pp. 1–13.
doi:10.1007/s12152-016-9284-5

Abstract

In Unfit for the Future, Ingmar Persson and Julian Savulescu present a challenging argument in favour of biomedical moral enhancement (BME). In light of the existential threats of climate change, the insufficient moral capacities of the human species seem to require a cautiously designed programme of BME. The story of the tragedy of the commons creates the impression that climate catastrophe is unavoidable and consequently lends strength to the argument. The present paper analyses to what extent a policy in favour of BME can thereby be justified, with special emphasis on the political context. By reconstructing the theoretical assumptions of the argument and taking them seriously, it is revealed that the argument is self-defeating. The tragedy of the commons may make moral enhancement appear necessary, but when it comes to implementation, a second-order collective action problem emerges and impedes the execution of the idea. The paper examines several modifications of the argument and shows how it can be based on the easier enforceability of BME. While this implies enforcement, that is not an obstacle to the justification of BME; rather, enforceability might be the decisive advantage of BME over other means. To take account of the global character of climate change, the paper closes with an inquiry into possible justifications of enforced BME at the global level. The upshot of the entire line of argumentation is that Unfit for the Future cannot justify BME because it ignores the nature of the problem of climate protection and the political prerequisites of any solution.

The article is here.

The True Self: A psychological concept distinct from the self.

Strohminger, N., Newman, G., & Knobe, J. (in press).
Perspectives on Psychological Science.

A long tradition of psychological research has explored the distinction between characteristics that are part of the self and those that lie outside of it. Recently, a surge of research has begun examining a further distinction. Even among characteristics that are internal to the self, people pick out a subset as belonging to the true self. These factors are judged as making people who they really are, deep down. In this paper, we introduce the concept of the true self and identify features that distinguish people’s understanding of the true self from their understanding of the self more generally. In particular, we consider recent findings that the true self is perceived as positive and moral, and that this tendency is actor-observer invariant and cross-culturally stable. We then explore possible explanations for these findings and discuss their implications for a variety of issues in psychology.

The paper is here.

Wednesday, December 28, 2016

Oxytocin modulates third-party sanctioning of selfish and generous behavior within and between groups

Katie Daughters, Antony S.R. Manstead, Femke S. Ten Velden, Carsten K.W. De Dreu
Psychoneuroendocrinology, available online 3 December 2016

Abstract

Human groups function because members trust each other, reciprocate cooperative contributions, and reward others’ cooperation and punish their non-cooperation. Here we examined the possibility that such third-party punishment and reward of others’ trust and reciprocation is modulated by oxytocin, a neuropeptide generally involved in social bonding and in-group (but not out-group) serving behavior. Healthy males and females (N = 100) self-administered a placebo or 24 IU of oxytocin in a randomized, double-blind, between-subjects design. Participants indicated (at incentivized, real cost) their level of reward or punishment for in-group (out-group) investors donating generously or fairly to in-group (out-group) trustees, who back-transferred generously, fairly, or selfishly. Punishment (reward) was higher for selfish (generous) investments and back-transfers when (i) investors were in-group rather than out-group, and (ii) trustees were in-group rather than out-group, especially when (iii) participants received oxytocin rather than placebo. It follows, first, that oxytocin leads individuals to ignore out-groups as long as out-group behavior is not relevant to the in-group and, second, that oxytocin contributes to creating and enforcing in-group norms of cooperation and trust.
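
For readers unfamiliar with the trust game with third-party sanctioning that the abstract describes, here is a minimal sketch of its payoff structure: an investor sends part of an endowment to a trustee, the transfer is multiplied, the trustee chooses how much to return, and a third party can then pay a cost to punish or reward a player. All endowments, multipliers, and cost ratios below are illustrative assumptions, not the parameters used in the study.

```python
def trust_game(investment, back_transfer_share, endowment=10, multiplier=3):
    """Return (investor_payoff, trustee_payoff) before any sanctioning."""
    transferred = investment * multiplier          # transfer is multiplied
    returned = transferred * back_transfer_share   # trustee's back-transfer
    investor = endowment - investment + returned
    trustee = transferred - returned
    return investor, trustee

def third_party_sanction(payoff, points, cost_ratio=3, punish=True):
    """A third party spends `points` (at real cost to themselves) to change
    a player's payoff; each point removes (punishment) or adds (reward)
    `cost_ratio` units from/to the target."""
    delta = -points * cost_ratio if punish else points * cost_ratio
    return payoff + delta

# A selfish back-transfer: the trustee keeps the entire tripled transfer.
inv, tru = trust_game(investment=10, back_transfer_share=0.0)
# A third party punishes the selfish trustee with 3 costly points.
tru_after = third_party_sanction(tru, points=3, punish=True)
```

In the study, what varied was whether the investor, trustee, and sanctioning participant shared group membership, and whether the participant had received oxytocin or placebo.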

The article is here.

Inference of trustworthiness from intuitive moral judgments

Everett, J. A., Pizarro, D. A., & Crockett, M. J. (2016).
Journal of Experimental Psychology: General, 145(6), 772–787.

Moral judgments play a critical role in motivating and enforcing human cooperation, and research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here, we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games. These findings provide empirical support for a partner choice account of moral intuitions whereby typically deontological judgments confer an adaptive function by increasing a person's likelihood of being chosen as a cooperation partner. Therefore, deontological moral intuitions may represent an evolutionarily prescribed prior that was selected for through partner choice mechanisms.

The article is here.

Tuesday, December 27, 2016

Is Addiction a Brain Disease?

Kent C. Berridge
Neuroethics (2016), pp. 1–5.
doi:10.1007/s12152-016-9286-3

Abstract

Where does normal brain or psychological function end, and pathology begin? The line can be hard to discern, making disease sometimes a tricky word. In addiction, normal ‘wanting’ processes become distorted and excessive, according to the incentive-sensitization theory. Excessive ‘wanting’ results from drug-induced neural sensitization changes in underlying brain mesolimbic systems of incentive. ‘Brain disease’ was never used by the theory, but neural sensitization changes are arguably extreme enough and problematic enough to be called pathological. This implies that ‘brain disease’ can be a legitimate description of addiction, though caveats are needed to acknowledge roles for choice and active agency by the addict. Finally, arguments over ‘brain disease’ should be put behind us. Our real challenge is to understand addiction and devise better ways to help. Arguments over descriptive words only distract from that challenge.

The article is here.

Artificial moral agents: creative, autonomous and social. An approach based on evolutionary computation

Ioan Muntean and Don Howard
Frontiers in Artificial Intelligence and Applications
Volume 273: Sociable Robots and the Future of Social Relations

Abstract

In this paper we propose a model of artificial normative agency that accommodates some of the social competencies we expect from artificial moral agents. The artificial moral agent (AMA) discussed here is based on two components: (i) a version of the virtue ethics of human agents (VE) adapted to artificial agents, called here “virtual virtue ethics” (VVE); and (ii) an implementation based on evolutionary computation (EC), more concretely genetic algorithms. The reasons for choosing VVE and EC relate to two elements that are, we argue, central to any approach to artificial morality: autonomy and creativity. The greater the autonomy an artificial agent has, the more it needs moral standards. In virtue ethics, each agent builds her own character over time; creativity comes in degrees as the individual becomes morally competent. The model of an autonomous and creative AMA thus implemented is called GAMA: Genetic(-inspired) Autonomous Moral Agent. First, unlike the majority of other implementations of machine ethics, our model is agent-centered rather than action-centered; it emphasizes the developmental and behavioral aspects of the ethical agent. Second, in our model the AMA does not make decisions exclusively and directly by following rules or by calculating the best outcome of an action. The model incorporates rules as initial data (the initial population of the genetic algorithms) or as correction factors, but not as the main structure of the algorithm. Third, our computational model is less conventional, or at least does not fall within the Turing tradition in computation. Genetic algorithms are excellent search tools that avoid local minima and generate solutions based on previous results. In the GAMA model, only prospective at this stage, the VVE approach to ethics is better implemented by EC. Finally, GAMA agents can display sociability through competition among the best moral actions and the desire to win the competition. Both VVE and EC are better suited than the standard approaches to a “social approach” to AMAs, making GAMA a more promising “moral and social artificial agent.”
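
For readers unfamiliar with the genetic algorithms the abstract relies on, here is a minimal, generic sketch of the selection–crossover–mutation loop. It evolves bit strings toward an arbitrary target and is purely illustrative: the fitness function, population size, and rates are assumptions for the sketch, not anything from the GAMA implementation.

```python
import random

TARGET = [1] * 20  # hypothetical "ideal" trait vector for the toy fitness

def fitness(individual):
    # Count positions matching the target pattern.
    return sum(1 for a, b in zip(individual, TARGET) if a == b)

def crossover(p1, p2):
    # Single-point crossover: combine a prefix of one parent
    # with the suffix of the other.
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

def mutate(individual, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in individual]

def evolve(pop_size=50, length=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives intact as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Reproduction: offspring via crossover of random parents,
        # plus mutation.
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
```

Because solutions are searched rather than deduced from rules, behaviour can improve generation by generation — the sense in which, on the authors' account, rules serve as initial data rather than as the main structure of the algorithm.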

The article is here.

Monday, December 26, 2016

Changing Memories: Between Ethics and Speculation

Eric Racine and William Affleck
AMA Journal of Ethics. December 2016, Volume 18, Number 12: 1241-1248.
doi: 10.1001/journalofethics.2016.18.12.sect1-1612.

Abstract

Over the past decade, a debate has emerged between those who believe that memory-modulating technologies are inherently dangerous and need to be regulated and those who believe these technologies present minimal risk and thus view concerns about their use as far-fetched and alarmist. This article tackles three questions central to this debate: (1) Do these technologies jeopardize personhood? (2) Are the risks of these technologies acceptable? (3) Do these technologies require special regulation or oversight? Although concerns about the unethical use of memory-modulating technologies are legitimate, these concerns should not override the responsible use of memory-modulating technologies in clinical contexts. Accordingly, we call for careful comparative analysis of their use on a case-by-case basis.

The article is here.

Reframing Research Ethics: Towards a Professional Ethics for the Social Sciences

Nathan Emmerich
Sociological Research Online, 21 (4), 7
DOI: 10.5153/sro.4127

Abstract

This article is premised on the idea that were we able to articulate a positive vision of the social scientist's professional ethics, this would enable us to reframe social science research ethics as something internal to the profession. As such, rather than suffering under the imperialism of a research ethics constructed for the purposes of governing biomedical research, social scientists might argue for ethical self-regulation with greater force. I seek to provide the requisite basis for such an 'ethics' by, first, suggesting that the conditions which gave rise to biomedical research ethics are not replicated within the social sciences. Second, I argue that social science research can be considered as the moral equivalent of the 'true professions.' Not only does it have an ultimate end, but it is one that is – or, at least, should be – shared by the state and society as a whole. I then present a reading of confidentiality as a methodological – and not simply ethical – aspect of research, one that offers further support for the view that social scientists should attend to their professional ethics and the internal standards of their disciplines, rather than the contemporary discourse of research ethics that is rooted in the bioethical literature. Finally, and by way of a conclusion, I consider the consequences of the idea that social scientists should adopt a professional ethics and propose that the Clinical Ethics Committee might provide an alternative model for the governance of social science research.

The article is here.

Sunday, December 25, 2016

Excerpt from Stanley Kubrick's Playboy Interview 1968

Playboy, 1968

Playboy: If life is so purposeless, do you feel it’s worth living?

Kubrick: Yes, for those who manage somehow to cope with our mortality. The very meaninglessness of life forces a man to create his own meaning. Children, of course, begin life with an untarnished sense of wonder, a capacity to experience total joy at something as simple as the greenness of a leaf; but as they grow older, the awareness of death and decay begins to impinge on their consciousness and subtly erode their joie de vivre (a keen enjoyment of living), their idealism - and their assumption of immortality.

As a child matures, he sees death and pain everywhere about him, and begins to lose faith in the ultimate goodness of man. But if he’s reasonably strong - and lucky - he can emerge from this twilight of the soul into a rebirth of life’s élan (enthusiastic and assured vigour and liveliness).

Both because of and in spite of his awareness of the meaninglessness of life, he can forge a fresh sense of purpose and affirmation. He may not recapture the same pure sense of wonder he was born with, but he can shape something far more enduring and sustaining.

The most terrifying fact about the universe is not that it is hostile but that it is indifferent; but if we can come to terms with this indifference and accept the challenges of life within the boundaries of death - however mutable man may be able to make them - our existence as a species can have genuine meaning and fulfilment. However vast the darkness, we must supply our own light.

The entire interview is here.