Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Agents.

Tuesday, July 17, 2018

The Rise of the Robots and the Crisis of Moral Patiency

John Danaher
Pre-publication version of a paper forthcoming in AI and Society

Abstract

This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of analogous arguments made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they prove instructive, as they illustrate a way in which the rise of the robots could impact upon civilization even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.

The paper is here.

Thursday, March 15, 2018

Computing and Moral Responsibility

Noorman, Merel
The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.)

Traditionally, philosophical discussions on moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today’s increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility (Jonas 1984; Waelbers 2009). As we interact with and through these artifacts, they affect the decisions that we make and how we make them (Latour 1992). They persuade, facilitate and enable particular human cognitive processes, actions or attitudes, while constraining, discouraging and inhibiting others. For instance, internet search engines prioritize and present information in a particular order, thereby influencing what internet users get to see. As Verbeek points out, such technological artifacts are “active mediators” that “actively co-shape people’s being in the world: their perception and actions, experience and existence” (2006, p. 364). As active mediators, they change the character of human action and, as a result, challenge conventional notions of moral responsibility (Jonas 1984; Johnson 2001).

Computing presents a particular case for understanding the role of technology in moral responsibility. As these technologies become a more integral part of daily activities, automate more decision-making processes and continue to transform the way people communicate and relate to each other, they further complicate the already problematic task of attributing moral responsibility. The growing pervasiveness of computer technologies in everyday life, the growing complexities of these technologies and the new possibilities that they provide raise new kinds of questions: who is responsible for the information published on the Internet? Who is responsible when a self-driving vehicle causes an accident? Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomously, can or should humans still be held responsible for the behavior of these technologies?

The entry is here.

Tuesday, June 3, 2014

Does Belief in Free Will Make Us Better People?

By Jonathan Schooler
Big Questions Online
Originally published August 12, 2013

Resolving what to think about free will is itself a choice. Like many other important decisions, there may be alternatives that are better or worse for each of us, but no single conclusion is necessarily appropriate for everyone.

Too often scholars treat the topic of free will as if there currently exists a single indisputably “correct” perspective. However, the sheer variety of accounts of whether and how our choices control our actions demonstrates that this issue is far from resolved.

Given this lack of consensus, each of us is faced with deciding for ourselves where we stand on an issue that may have important consequences for how we lead our lives. Increasing evidence suggests that people’s views about free will bear on their pro-social behaviors, sense of personal control, and general well-being.

The entire story is here.

Editor's note: Psychologists often provide feedback to their patients about responsibility, choice, options, and autonomy. In essence, psychologists hold, if nothing else, a folk view of free will, and it becomes part of the therapeutic relationship. The articles on free will are meant to provoke self-reflection on our views of free will and how these are expressed in psychotherapy. This topic may become a future podcast.

Thursday, September 19, 2013

To Bee or not To Bee: Punishment, bee keeping, and the virtue of making choices

By Katrina Sifferd
Psychiatric Ethics Blog - Kerry Gutridge
Originally posted August 28, 2013

Here is an excerpt:

Recently I’ve been thinking about the importance of rehabilitative programs and alternative sentencing from the perspective of Aristotelian virtue theory. Virtue theory supports such programs as an important way to recognize offenders’ moral agency. Moral agency involves the ability of a person to act such that their actions deserve praise or blame. Virtue theory sees choice-making as the primary means for a moral agent to develop and exercise character traits: by choosing generous actions one becomes more generous, and in turn, being generous allows one to choose generous actions more easily. The theory provides a means for critiquing punishments that unfairly impose upon this process of moral development.

The Aristotelian label for this process – where character traits like honesty, kindness and courage become stable – is “habituation.” Habituation involves practicing the trait via the use of practical reason, which allows a person to determine which actions are appropriate in any given situation. A stable disposition to act in accordance with a trait, such as honesty, is established as a result of making appropriately honest choices over time and in a variety of circumstances. However, even stable traits do not dictate automatic behavioral responses: if they did, changes in character would be impossible. Instead, traits should be seen as flexible, reasons-responsive dispositions to behave that are in constant development or decline, depending on the choices that one makes (see Annas 2011; Webber 2006).

The entire blog post is here.

Saturday, July 6, 2013

Mind and Morality

Published by Steven Novella
NeuroLogica Blog
Originally published June 18, 2013

One of the themes of this blog, reflecting my skeptical philosophy, is that our brains construct reality – meaning that our perceptions, memories, internal model of reality, narrative of events, and emotions are all constructed artifacts of our neurological processing. This is, in my opinion, an undeniable fact revealed by neuroscience.

This realization, in turn, leads to neuropsychological humility – putting our perceptions, memories, thoughts, and feelings into a proper perspective. Thinking that you know what you saw, that you remember clearly, or that your “gut” feeling is a reliable moral compass is nothing but naive arrogance.

Perhaps the most difficult aspect of constructed reality to fully accept is our morality. When we have a deep moral sense of what is right and wrong, we feel as if the universe dictates that it is so. Our moral senses feel objectively right to us. But this too is just an illusion, an evolved construction of our brains.

Before I go on, let me point out that this does not mean morality is completely relative. I discuss the issue here and here, and if you have lots of time on your hands you can wade through the hundreds of following comments.

The neurologically constructed nature of morality means that neuroscientists (including psychologists) can investigate how our morals are constructed, just like anything else the brain does. A recent series of experiments published in Psychological Science did just that.

The entire blog post is here.

Friday, July 5, 2013

Objective vs Subjective Morality

Published by Steven Novella
NeuroLogica Blog
Originally published January 11, 2013

I have been fascinated by the philosophy of ethics ever since I took a course in it as an undergraduate. This is partly because I enjoy thinking about complex systems (which partly explains why I ended up in neurology as my specialty). I also greatly enjoy logic, and particularly deconstructing arguments (my own and others’) to identify their logical essence and see if or where they go wrong.

In a previous post I wrote about the philosophy of morality. This spawned over 400 comments (so far), so it seems we could use another post to reset the conversation.

The discussion concerns objective versus subjective morality, mostly focusing on one proponent of objective morality (a commenter going by the nym Zach). Here I will lay out my position on a philosophical basis for morality and explain why I think objective morality is not only unworkable, it’s a fiction.

First, let’s define “morality” and discuss why it is needed. Morality is a code of behavior that aspires to some goal that is perceived as good. The question at hand is where morals and morality come from. I think this question is informed by the question of why we need morals in the first place.

I maintain that morals can only be understood in the context of the moral actor. Humans, for example, have emotions and feelings. We care about stuff: about our own well-being, about those we love, about our “tribe.” We also have an evolved sense of morality, including concepts such as reciprocity and justice.

Further, humans are social animals, and in fact we have no choice but to share this planet with each other. Our behavior, therefore, affects others. If we had no cares at all about what happens to us or others, or if our actions had no effect on anything but ourselves, then there would be no need for morality, and in fact morality would have no meaning.

We can take it as an empirical fact, however, that humans have feelings and that our actions affect others; these are therefore well-founded premises for a moral system. Philosophers have tried to derive further premises from this starting point. The goal is to identify the most fundamental principles, or determine the most reasonable first principles, and then proceed carefully from there.

The entire blog post is here.

Friday, May 31, 2013

Not robots: children's perspectives on authenticity, moral agency and stimulant drug treatments

By Ilina Singh
J Med Ethics 2013;39:359-366 doi:10.1136/medethics-2011-100224

Abstract

In this article, I examine children's reported experiences with stimulant drug treatments for attention deficit hyperactivity disorder in light of bioethical arguments about the potential threats of psychotropic drugs to authenticity and moral agency. Drawing on a study that involved over 150 families in the USA and the UK, I show that children are able to report threats to authenticity, but that the majority of children are not concerned with such threats. On balance, children report that stimulants improve their capacity for moral agency, and they associate this capacity with an ability to meet normative expectations. I argue that although under certain conditions stimulant drug treatment may increase the risk of a threat to authenticity, there are ways to minimise this risk and to maximise the benefits of stimulant drug treatment. Medical professionals in particular should help children to flourish with stimulant drug treatments, in good and in bad conditions.

The entire article is here.

Thursday, May 30, 2013

Bioethicists must not allow themselves to become a 'priestly caste'

The increasing use of expert bioethicists has profound anti-democratic implications

By Nathan Emmerich
The Guardian - Political Science Blog
Originally published May 18, 2013

In a secular age it might seem that the time for moral authorities has passed. However, research in the life sciences and biomedicine has produced a range of moral concerns and prompted the emergence of bioethics, an area of study that specialises in the ethical analysis of these issues. The result has been the emergence of what we might call expert bioethicists, a cadre of professionals who, while logical and friendly, have nevertheless been ordained as secular priests.

This suggestion – that there are expert bioethicists – might appear to have profoundly anti-democratic implications. Indeed, handling expertise, including scientific expertise, is a central difficulty for democratic societies, and its extension into the realm of moral values seems, on the face of it, to compound the problem. Nevertheless, the Human Fertilisation and Embryology Authority (HFEA) has consistently made use of expert bioethicists, and two members of the recently convened Emerging Science and Bioethics Advisory Committee (ESBAC) are listed as "bioethics specialists".

If we are to govern the biosciences and medical practice effectively, there seems to be an increasing need for expert bioethicists. Nevertheless, there is a different dynamic to the politics of bioethical expertise, precisely because the opinions of bioethical experts cannot be used to obviate those of other moral agents.

This might seem like an odd claim. If there are expert bioethicists, surely we should prefer their opinions to those of non-experts? However, this is to assume that bioethical expertise is modelled on scientific expertise. The idea of the scientist as expert is so strong that we often forget there are other forms of expertise.

The entire post is here.