Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Enhancement.

Tuesday, October 26, 2021

The Fragility of Moral Traits to Technological Interventions

J. Fabiano
Neuroethics 14, 269–281 (2021). 
https://doi.org/10.1007/s12152-020-09452-6

Abstract

I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.

Conclusion: Fragility Leads to Increased Risks 

Any substantial technological modification of moral traits would be more likely to cause harm than benefit. Moral traits are particularly prone to unexpected disturbances, as the co-operation case exemplifies; this fragility is amplified by their self-reinforcing and irreversible nature, and is just what their complex aetiology would lead one to suspect. Even the most seemingly simple improvement, if only slightly mistaken, is likely to lead to significant negative outcomes. Unless we produce an almost perfectly calibrated deep moral enhancement, its implementation will carry large risks. Deep moral enhancement is likely to be hard to develop safely, but not necessarily impossible or undesirable. Given that deep moral enhancement could prevent extreme risks for humanity, in particular by decreasing the risk of human extinction, it may still be the case that we should attempt to develop it. I am not claiming that our current traits are well suited to dealing with global problems. On the contrary, there are certainly reasons to expect that there are better traits that could be brought about by enhancement technologies. However, I believe my arguments indicate there are also much worse, more socially disruptive, traits accessible through technological intervention.

Monday, August 30, 2021

Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?

Lara, F. 
Sci Eng Ethics 27, 42 (2021). 
https://doi.org/10.1007/s11948-021-00318-5

Abstract

Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be so only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.

From the Conclusion

The key in moral education is that it be pursued while respecting and promoting personal autonomy. Educators should avoid the mistake of limiting the capacities of individuals to freely and reflectively determine their own values by attempting to enhance their behaviour directly. On the contrary, they must do what they can to ensure that those being educated, at least at an advanced age, actively participate in this process in order to assume the values that will define them and give meaning to their lives. The problem with current proposals for moral enhancement through new technologies is that they treat the subject of their interventions as a "passive recipient". Moral bioenhancement does so because it aims to change the motivation of the individual by bypassing the reflection and gradual assimilation of values that should accompany any adoption of new identity traits. This constitutes a passivity that would also occur in proposals for moral AI enhancement based on ethical machines that either replace humans in decision-making, or surreptitiously direct them to do the right thing, or simply advise them based on their own supposedly undisputed values.

Friday, November 22, 2019

Artificial Intelligence as a Socratic Assistant for Moral Enhancement

Lara, F. & Deckers, J.
Neuroethics (2019).
https://doi.org/10.1007/s12152-019-09401-y

Abstract

The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several proposals have been made on how to use AI for moral enhancement, we present an alternative that we argue to be superior to other proposals that have been developed.

(cut)

Here is a portion of the Conclusion

Given our incomplete current knowledge of the biological determinants of moral behaviour and of the use of biotechnology to safely influence such determinants, it is reckless to defend moral bioenhancement, even if it were voluntary. However, the age-old human desire to be morally better must be taken very seriously in a globalised world where local decisions can have far-reaching consequences and where moral corruption threatens the survival of us all. This situation forces us to seek the satisfaction of that desire by means of other technologies. AI could, in principle, be a good option. Since it does not intervene directly in our biology, it can, in principle, be less dangerous and controversial.

However, we argued that it also carries risks. For the exhaustive project, these include the capitulation of human decision-making to machines that we may not understand and the negation of what makes us ethical human beings. We argued also that even some auxiliary projects that do not promote the surrendering of human decision-making, for example systems that foster decision-making on the basis of moral agents’ own values, may jeopardise the development of our moral capacities if they focus too much on outcomes, thus providing insufficient opportunities for individuals to be critical of their values and of the processes by which outcomes are produced, which are essential factors for personal moral progress and for rapprochement between different individuals’ positions.

Saturday, July 27, 2019

An Obligation to Enhance?

Anton Vedder
Topoi 2019; 38 (1) pp. 49-52. Available at SSRN: https://ssrn.com/abstract=3407867

Abstract

This article discusses some rather formal characteristics of possible obligations to enhance. Obligations to enhance can exist in the absence of good moral reasons. If obligation and duty, however, are considered as synonyms, the enhancement involved must be morally desirable in some respect. Since enhancers and enhanced can, but need not, coincide, care is needed regarding the question of who exactly is addressed by an obligation or a duty to enhance: the person on whom the enhancing treatment is performed, or the controller or the operator of the enhancement. In particular, the position of the operator is easily overlooked. The exact functionality of the specific enhancement is all-important, not only for the acceptability of a specific form of enhancement, but also for its chances of becoming a duty or morally obligatory. Finally and most importantly, since obligations can exist without good moral reasons, there can be obligations to enhance that are not morally right, let alone desirable.

From the Conclusion:

Obligations to enhance can exist in the presence and in the absence of good moral reasons for them. Obligations are based on preceding promises, agreements or regulatory arrangements; they do not necessarily coincide with moral duties. The existence of such obligations therefore need not be morally desirable. If obligation and duty are considered as synonyms, the enhancement involved must be morally desirable in some respect. Since enhancers and enhanced can, but need not, coincide, care is needed regarding the question of who exactly is addressed by an obligation or a duty to enhance: the person on whom the enhancing treatment is performed, or the controller or the operator of the enhancement. In particular, the position of the operator is easily overlooked. Finally, the exact functionality of the specific enhancement is all-important, not only for the acceptability of a specific form of enhancement, but also for its chances of becoming a duty or morally obligatory.

Sunday, October 28, 2018

Moral enhancement and the good life

Hazem Zohny
Med Health Care and Philos (2018).
https://doi.org/10.1007/s11019-018-9868-4

Abstract

One approach to defining enhancement is in the form of bodily or mental changes that tend to improve a person’s well-being. Such a “welfarist account”, however, seems to conflict with moral enhancement: consider an intervention that improves someone’s moral motives but which ultimately diminishes their well-being. According to the welfarist account, this would not be an instance of enhancement—in fact, as I argue, it would count as a disability. This seems to pose a serious limitation for the account. Here, I elaborate on this limitation and argue that, despite it, there is a crucial role for such a welfarist account to play in our practical deliberations about moral enhancement. I do this by exploring four scenarios where a person’s motives are improved at the cost of their well-being. A framework emerges from these scenarios which can clarify disagreements about moral enhancement and help sharpen arguments for and against it.


Wednesday, July 11, 2018

Could Moral Enhancement Interventions be Medically Indicated?

Sarah Carter
Health Care Analysis
December 2017, Volume 25, Issue 4, pp 338–353

Abstract

This paper explores the position that moral enhancement interventions could be medically indicated (and so considered therapeutic) in cases where they provide a remedy for a lack of empathy, when such a deficit is considered pathological. In order to argue this claim, the question as to whether a deficit of empathy could be considered pathological is examined, taking into account the difficulty of defining illness and disorder generally, and especially in the case of mental health. Following this, psychopathy and a fictionalised mental disorder (Moral Deficiency Disorder) are explored with a view to considering moral enhancement techniques as possible treatments for both conditions. At this juncture, having asserted and defended the position that moral enhancement interventions could, under certain circumstances, be considered medically indicated, this paper then briefly explores some of the consequences of this assertion. First, it is acknowledged that this broadening of diagnostic criteria in light of new interventions could fall foul of claims of medicalisation. It is then briefly noted that considering moral enhancement technologies to be akin to therapies in certain circumstances could lead to ethical and legal consequences and questions, such as those regarding regulation, access, and even consent.


Thursday, November 16, 2017

Moral Hard-Wiring and Moral Enhancement

Introduction

In a series of papers (Persson & Savulescu 2008; 2010; 2011a; 2012a; 2013; 2014a) and a book (Persson & Savulescu 2012b), we have argued that there is an urgent need to pursue research into the possibility of moral enhancement by biomedical means – e.g. by pharmaceuticals, non-invasive brain stimulation, genetic modification or other means directly modifying biology. The present time brings existential threats which human moral psychology, with its cognitive and moral limitations and biases, is unfit to address. Exponentially increasing, widely accessible technological advance and rapid globalisation create threats of intentional misuse (e.g. biological or nuclear terrorism) and global collective action problems, such as the economic inequality between developed and developing countries and anthropogenic climate change, which human psychology is not set up to address. We have hypothesized that these limitations are the result of the evolutionary function of morality being to maximize the fitness of small cooperative groups competing for resources. Because these limitations of human moral psychology pose significant obstacles to coping with the current moral mega-problems, we argued that biomedical modification of human moral psychology may be necessary. We have not argued that biomedical moral enhancement would be a single “magic bullet” but rather that it could play a role in a comprehensive approach which also features cultural and social measures.


Friday, November 3, 2017

A fundamental problem with Moral Enhancement

Joao Fabiano
Practical Ethics
Originally posted October 13, 2017

Moral philosophers often prefer to conceive thought experiments, dilemmas and problem cases involving single individuals who make one-shot decisions with well-defined short-term consequences. Morality is complex enough that such simplifications seem justifiable, or even necessary, for philosophical reflection. If, even within such simplified toy scenarios, we are still far from consensus on which moral theory is best, on what makes actions right or wrong, or even on whether these should be the central problems of moral philosophy, then introducing group or long-term effects would make matters significantly worse. However, when it comes to actually changing human moral dispositions with the use of technology (i.e., moral enhancement), ignoring the essential fact that morality deals with group behaviour with long-ranging consequences can be extremely risky. Despite those risks, attempting to provide a full account of morality before conducting moral enhancement would be both impractical and arguably risky in its own way. We seem to be far from such an account, yet there are pressing current moral failings, such as our inability to cooperate properly at large scale, which make solutions to present global catastrophic risks, such as global warming or nuclear war, next to impossible. Sitting back and waiting for a complete theory of morality might be riskier than attempting to fix our moral failings using incomplete theories. We must, nevertheless, proceed with caution and an awareness of such incompleteness. Here I will present several severe risks from moral enhancement that arise from focusing on improving individual dispositions while ignoring emergent societal effects, and point to tentative solutions to those risks. I deem those emergent risks fundamental problems both because they lie at the foundation of the theoretical framework guiding moral enhancement – moral philosophy – and because they seem, at present, inescapable; my proposed solution will aim at increasing awareness of such problems rather than directly solving them.


Monday, October 9, 2017

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.


Friday, May 5, 2017

The Duty to be Morally Enhanced

Persson, I. & Savulescu, J.
Topoi (2017)
doi:10.1007/s11245-017-9475-7

Abstract

We have a duty to try to develop and apply safe and cost-effective means to increase the probability that we shall do what we morally ought to do. It is here argued that this includes biomedical means of moral enhancement, that is, pharmaceutical, neurological or genetic means of strengthening the central moral drives of altruism and a sense of justice. Such a strengthening of moral motivation is likely to be necessary today because common-sense morality having its evolutionary origin in small-scale societies with primitive technology will become much more demanding if it is revised to serve the needs of contemporary globalized societies with an advanced technology capable of affecting conditions of life world-wide for centuries to come.


Friday, April 14, 2017

The moral bioenhancement of psychopaths

Elvio Baccarini and Luca Malatesti
The Journal of Medical Ethics
http://dx.doi.org/10.1136/medethics-2016-103537

Abstract

We argue that the mandatory moral bioenhancement of psychopaths is justified as a prescription of social morality. Moral bioenhancement is legitimate when it is justified on the basis of the reasons of the recipients. Psychopaths expect and prefer that the agents with whom they interact do not have certain psychopathic traits. Particularly, they have reasons to require the moral bioenhancement of psychopaths with whom they must cooperate. By adopting a public reason and a Kantian argument, we conclude that we can justify to a psychopath being the recipient of mandatory moral bioenhancement because he has a reason to require the application of this prescription to other psychopaths.

Sunday, March 26, 2017

Moral Enhancement Using Non-invasive Brain Stimulation

R. Ryan Darby and Alvaro Pascual-Leone
Front. Hum. Neurosci., 22 February 2017
https://doi.org/10.3389/fnhum.2017.00077

Biomedical enhancement refers to the use of biomedical interventions to improve capacities beyond normal, rather than to treat deficiencies due to diseases. Enhancement can target physical or cognitive capacities, but also complex human behaviors such as morality. However, the complexity of normal moral behavior makes it unlikely that morality is a single capacity that can be deficient or enhanced. Instead, our central hypothesis will be that moral behavior results from multiple, interacting cognitive-affective networks in the brain. First, we will test this hypothesis by reviewing evidence for modulation of moral behavior using non-invasive brain stimulation. Next, we will discuss how this evidence affects ethical issues related to the use of moral enhancement. We end with the conclusion that while brain stimulation has the potential to alter moral behavior, such alteration is unlikely to improve moral behavior in all situations, and may even lead to less morally desirable behavior in some instances.


Thursday, December 29, 2016

The Tragedy of Biomedical Moral Enhancement

Stefan Schlag
Neuroethics (2016). pp 1-13.
doi:10.1007/s12152-016-9284-5

Abstract

In Unfit for the Future, Ingmar Persson and Julian Savulescu present a challenging argument in favour of biomedical moral enhancement. In light of the existential threats of climate change, insufficient moral capacities of the human species seem to require a cautiously shaped programme of biomedical moral enhancement. The story of the tragedy of the commons creates the impression that climate catastrophe is unavoidable and consequently gives strength to the argument. The present paper analyses to what extent a policy in favour of biomedical moral enhancement can thereby be justified and puts special emphasis on the political context. By reconstructing the theoretical assumptions of the argument and by taking them seriously, it is revealed that the argument is self-defeating. The tragedy of the commons may make moral enhancement appear necessary, but when it comes to its implementation, a second-order collective action-problem emerges and impedes the execution of the idea. The paper examines several modifications of the argument and shows how it can be based on easier enforceability of BME. While this implies enforcement, it is not an obstacle for the justification of BME. Rather, enforceability might be the decisive advantage of BME over other means. To take account of the global character of climate change, the paper closes with an inquiry of possible justifications of enforced BME on a global level. The upshot of the entire line of argumentation is that Unfit for the Future cannot justify BME because it ignores the nature of the problem of climate protection and political prerequisites of any solution.


Friday, December 9, 2016

Moral neuroenhancement

Earp, B. D., Douglas, T., & Savulescu, J. (forthcoming). Moral neuroenhancement. In S. Johnson & K. Rommelfanger (eds.),  Routledge Handbook of Neuroethics.  New York: Routledge.

Abstract

In this chapter, we introduce the notion of moral neuroenhancement, offering a novel definition as well as spelling out three conditions under which we expect that such neuroenhancement would be most likely to be permissible (or even desirable). Furthermore, we draw a distinction between first-order moral capacities, which we suggest are less promising targets for neurointervention, and second-order moral capacities, which we suggest are more promising. We conclude by discussing concerns that moral neuroenhancement might restrict freedom or otherwise misfire, and argue that these concerns are not as damning as they may seem at first.


Friday, August 5, 2016

Moral Enhancement and Moral Freedom: A Critical Analysis

By John Danaher
Philosophical Disquisitions
Originally published July 19, 2016

The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?

Proponents of moral neuroenhancement think we should — though they typically focus on much higher stakes scenarios. A popular criticism of their project has emerged. This criticism holds that trying to ensure moral conformity comes at the price of moral freedom. If our brains are prodded, poked and tweaked so that we never do the wrong thing, then we lose the ‘freedom to fall’ — i.e. the freedom to do evil. That would be a great shame. The freedom to do the wrong thing is, in itself, an important human value. We would lose it in the pursuit of greater moral conformity.

Tuesday, June 28, 2016

Moral enhancements 2

By Michelle Ciurria
Moral Responsibility Blog
Originally published June 4, 2016

Here is an excerpt:

Here, I want to consider whether intended moral enhancements – those intended to induce pro-moral effects – can, somewhat paradoxically, undermine responsibility. I say ‘intended’ because, as we saw, moral interventions can have unintended (even counter-moral) consequences. This can happen for any number of reasons: the intervener can be wrong about what morality requires (imagine a Nazi intervener thinking that anti-Semitism is a pro-moral trait); the intervention can malfunction over time; the intervention can produce traits that are moral in one context but counter-moral in another (which seems likely, given that traits are highly context-sensitive, as I mentioned earlier); and so on – I won’t give a complete list. Even extant psychoactive drugs – which can count as a type of passive intervention – typically come with adverse side-effects; but the risk of unintended side-effects for futuristic interventions of a moral nature is substantially greater and more worrisome, because the technology is new, it operates on complicated cognitive structures, and it specifically operates on those structures constitutive of a person’s moral personality. Since intended moral interventions do not always produce their intended effects (pro-moral effects), I’ll discuss these interventions under two guises: interventions that go as planned and induce pro-moral traits (effective cases), and interventions that go awry (ineffective cases). I’ll also focus on the most controversial case of passive intervention: involuntary intervention, without informed consent.


Monday, June 27, 2016

Moral enhancements & moral responsibility

By Michelle Ciurria
Moral Responsibility Blog
Originally published May 25, 2016

Here is an excerpt:

What are our duties with respect to moral enhancements? We can approach this question from two directions: our individual duty to use or submit to moral interventions, and our duty to provide or administer them to people with moral deficits. This might seem to suggest a distinction between self-regarding duties and other-regarding duties, but this is a false dichotomy because the duty to enhance oneself is partly a duty to others – a duty to equip oneself to respect other people’s rights and interests. So both duties have an other-regarding dimension. The distinction I’m talking about is between duties to enhance oneself and duties to enhance other people: self-directed duties and other-directed duties.

These two duties also cannot be neatly demarcated because we might need to weigh self-directed duties against other-directed duties to achieve a proper balance. That is, given finite time and resources, my duty to enhance myself in some way might be outweighed by my duty to foster the capabilities of another person. So we need to work out a proper balance, and different normative frameworks will provide different answers. All frameworks, however, seem to support these two kinds of duties, though they balance them differently.


Wednesday, June 8, 2016

Are You Morally Modified?: The Moral Effects of Widely Used Pharmaceuticals

Neil Levy, Thomas Douglas, Guy Kahane, Sylvia Terbeck, Philip J. Cowen, Miles Hewstone, and Julian Savulescu
Philos Psychiatr Psychol. 2014 June 1; 21(2): 111–125.
doi:10.1353/ppp.2014.0023

Abstract

A number of concerns have been raised about the possible future use of pharmaceuticals designed to enhance cognitive, affective, and motivational processes, particularly where the aim is to produce morally better decisions or behavior. In this article, we draw attention to what is arguably a more worrying possibility: that pharmaceuticals currently in widespread therapeutic use are already having unintended effects on these processes, and thus on moral decision making and morally significant behavior. We review current evidence on the moral effects of three widely used drugs or drug types: (i) propranolol, (ii) selective serotonin reuptake inhibitors, and (iii) drugs that affect oxytocin physiology. This evidence suggests that the alterations to moral decision making and behavior caused by these agents may have important and difficult-to-evaluate consequences, at least at the population level. We argue that the moral effects of these and other widely used pharmaceuticals warrant further empirical research and ethical analysis.


Soon we’ll use science to make people more moral

By James J. Hughes
The Washington Post
Originally posted May 19, 2016

Here is an excerpt:

The emerging debate over the use of drugs and devices for moral enhancement has had three principal viewpoints: those who focus on boosting moral sentiments such as empathy; those who would just boost moral reasoning; and the skeptics. While the former two groups accept the goal of moral enhancement — and differ over the best method — the skeptics reject the project. They argue that moral enhancement therapies are overhyped and that, even if morality drugs were effective, relying on them would be bad for our character.

It is certainly true that the initial enthusiasm for certain moral enhancement therapies has been tempered by subsequent research. Dozens of studies have suggested that genes that regulate oxytocin, the “cuddle hormone,” affect trust and empathy, and that empathy is boosted when subjects snort oxytocin. But it now appears that the effects of boosting oxytocin were over-reported and that some of the hormone’s effects are less than cuddly — oxytocin tends to boost empathy only for people like us, increasing ethnocentric “in-group bias.”


Wednesday, May 11, 2016

Procedural Moral Enhancement

G. Owen Schaefer and Julian Savulescu
Neuroethics, pp. 1–12.
First online: 20 April 2016

Abstract

While philosophers are often concerned with the conditions for moral knowledge or justification, in practice something arguably less demanding is just as, if not more, important – reliably making correct moral judgments. Judges and juries should hand down fair sentences, government officials should decide on just laws, members of ethics committees should make sound recommendations, and so on. We want such agents, more often than not and as often as possible, to make the right decisions. The purpose of this paper is to propose a method of enhancing the moral reliability of such agents. In particular, we advocate a procedural approach: certain internal processes generally contribute to people’s moral reliability. Building on the early work of Rawls, we identify several particular factors related to moral reasoning that are specific enough to be the target of practical intervention: logical competence, conceptual understanding, empirical competence, openness, empathy and bias. Improving on these processes can in turn make people more morally reliable in a variety of contexts and has implications for recent debates over moral enhancement.