Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Agency.

Saturday, June 20, 2015

Mind Over Masters: The Question of Free Will

World Science Festival
Originally streamed May 30, 2015

Do we make conscious decisions? Or, as many scientists and philosophers argue, are all of our actions predetermined? And if they are predetermined—if we don't have free will—are we responsible for what we do? These are questions that have been debated for centuries, but now neurotechnology is allowing scientists to study brain activity neuron by neuron to try to determine how and when our brains decide to act. With neuroscientists, psychologists, and philosophers we’ll use the latest findings to explore the question of just how much agency we have in the world, and how the answer impacts our ethics, our behavior, and our society.


Saturday, March 7, 2015

Traditional and Experimental Approaches to Free Will and Moral Responsibility

By Gunnar Björnsson and Derk Pereboom
Forthcoming in Justin Sytsma & Wesley Buckwalter (eds.), Companion to Experimental Philosophy, Blackwell

1. Introduction

From the early days of experimental philosophy, attention has been focused on the problem of free will and moral responsibility. This is a natural topic for this methodology, given its proximity to the universal concerns of human life, together with the intensity with which the issues are disputed. We’ll begin by introducing the problem and the standard terminology used to frame it in the philosophical context. We’ll then turn to the contributions of experimental philosophy, and the prospects for the use of this methodology in the area.

The problem of free will and moral responsibility arises from a conflict between two powerful considerations. On the one hand, we human beings typically believe that we are in control of our actions in a particularly weighty sense. We express this sense of difference when we attribute moral responsibility to human beings but not, for example, to machines like thermostats and computers. Traditionally, it’s supposed that moral responsibility requires us to have some type of free will in producing our actions, and hence we assume that humans, by contrast with such machines, have this sort of free will. At the same time, there are reasons for regarding human beings as relevantly more like mechanical devices than we ordinarily imagine. These reasons stem from various sources: most prominently, from scientific views that consider human beings to be components of nature and therefore governed by natural laws, and from theological concerns that require everything that occurs to be causally determined by God.

One threat to our having the sort of free will required for moral responsibility results from the view that the natural laws are deterministic, which motivates the position that all of our actions are causally determined by factors beyond our control. An action will be causally determined in this way if a process governed by the laws of nature and beginning with causally relevant factors prior to the agent’s coming to be ensures the occurrence of the action. An action will also be causally determined by factors beyond the agent’s control if its occurrence is ensured by a causal process that originates in God’s will and ends with the action. For many contemporary philosophers, the first, naturalistic version of causal determinism about action is a serious possibility, and thus the threat that it poses to our conception of ourselves as morally responsible for our actions is serious and prevalent.

The entire chapter is here.

Thursday, February 12, 2015

Dimensions of Moral Emotions

By Kurt Gray and Daniel M. Wegner
Emotion Review Vol. 3, No. 3 (July 2011) 258–260

Abstract

Anger, disgust, elevation, sympathy, relief. If the subjective experience of each of these emotions is the same whether elicited by moral or nonmoral events, then what makes moral emotions unique? We suggest that the configuration of moral emotions is special—a configuration given by the underlying structure of morality. Research suggests that people divide the moral world along the two dimensions of valence (help/harm) and moral type (agent/patient). The intersection of these two dimensions gives four moral exemplars—heroes, villains, victims and beneficiaries—each of which elicits unique emotions. For example, victims (harm/patient) elicit sympathy and sadness. Dividing moral emotions into these four quadrants provides predictions about which emotions reinforce, oppose and complement each other.

The entire article is here.

Friday, January 16, 2015

My brain made me do it, but does that matter?

By Walter Sinnott-Armstrong
The Conversation
Originally published December 12, 2014

Here is an excerpt:

Despite some rhetoric, almost nobody really believes that the fact that your brain made you do it is by itself enough to excuse you from moral responsibility. On the other side, almost everybody agrees that some brain states, such as seizures, do remove moral responsibility. The real issues lie in the middle.

What about mental illnesses? Addictions? Compulsions? Brainwashing? Hypnosis? Tumors? Coercion? Alien hand syndrome? Multiple personality disorder? These cases are all tricky, so philosophers disagree about which people in these conditions are responsible — and why. Nonetheless, these difficult cases do not show that there is no difference between seizures and normal desires, just as twilight does not show that there is no difference between night and day. It is hard to draw a line, but that does not mean that there is no line.

The entire article is here.

Tuesday, December 30, 2014

The Dark Side of Free Will

Published on Dec 9, 2014

This talk was given at a local TEDx event, produced independently of the TED Conferences. What would happen if we all believed free will didn't exist? As a free will skeptic, Dr. Gregg Caruso contends our society would be better off believing there is no such thing as free will.

Wednesday, December 24, 2014

Don't Execute Schizophrenic Killers

By Sally L. Satel
Bloomberg View
Originally posted December 1, 2014

Is someone who was diagnosed with schizophrenia years before committing murder sane enough to be sentenced to death?

The government thinks so in the case of Scott L. Panetti, 56, who will die on Wednesday by lethal injection in Texas unless Governor Rick Perry stays the execution.

(cut)

This is unjust. It is wrong to execute, even to punish, people who are so floridly psychotic when they commit their crimes that they are incapable of correcting the errors by logic or evidence.

Yet Texas, like many other states, considers a defendant sane as long as he knows, factually, that murder is wrong. Indeed, Panetti’s jury, which was instructed to apply this narrow standard, may have been legally correct to reject his insanity defense because he may have known that the murders were technically wrong.

The entire article is here.

Monday, December 15, 2014

Implicit Bias and Moral Responsibility: Probing the Data.

By Neil Levy

Abstract

Psychological research strongly suggests that many people harbor implicit attitudes that diverge from their explicit attitudes, and that under some conditions these people can be expected to perform actions that owe their moral character to the agent’s implicit attitudes. In this paper, I pursue the question whether agents are morally responsible for these actions by probing the available evidence concerning the kind of representation an implicit attitude is. Building on previous work, I argue that the reduction in the degree and kind of reasons-sensitivity these attitudes display undermines agents’ responsibility-level control over the moral character of actions. I also argue that these attitudes do not fully belong to agents’ real selves in ways that would justify holding them responsible on accounts that centre on attributability.

The entire article is here.

Wednesday, December 3, 2014

Moral Psychology as Accountability

By Brendan Dill and Stephen Darwall
In Justin D’Arms & Daniel Jacobson (eds.), Moral Psychology and Human Agency: Philosophical Essays on the Science of Ethics (pp. 40-83). Oxford University Press. Pre-publication draft; for citation or quotation, please refer to the published volume.

Introduction

When moral psychology exploded a decade ago with groundbreaking research, there was considerable excitement about the potential fruits of collaboration between moral philosophers and moral psychologists. However, this enthusiasm soon gave way to controversy about whether either field was, or even could be, relevant to the other (e.g., Greene 2007; Berker 2009). After all, it seems at first glance that the primary question researched by moral psychologists—how people form judgments about what is morally right and wrong—is independent from the parallel question investigated by moral philosophers—what is in fact morally right and wrong, and why.

Once we transcend the narrow bounds of quandary ethics and “trolleyology,” however, a broader look at the fields of moral psychology and moral philosophy reveals several common interests. Moral philosophers strive not only to determine what actions are morally right and wrong, but also to understand our moral concepts, practices, and psychology. They ask what it means to be morally right, wrong, or obligatory: what distinguishes moral principles from other norms of action, such as those of instrumental rationality, prudence, excellence, or etiquette (Anscombe 1958; Williams 1985; Gibbard 1990; Annas 1995)? Moral psychologists pursue this very question in research on the distinction between moral and conventional rules (Turiel 1983; Nichols 2002; Kelly et al. 2007; Royzman, Leeman, and Baron 2009) and in attempts to define the moral domain (e.g., Haidt and Kesebir 2010).

The entire paper is here.

Monday, December 1, 2014

Blame as Harm

By Patrick Mayer
Academia.edu

I. Introduction

Among philosophers who work on the topic of moral responsibility there is widespread agreement with the claim that when we debate the nature and existence of moral responsibility we are not talking about punishment. To say that someone is morally responsible for a bad action is not to say that she ought to be punished for it, nor does saying that moral responsibility is a fiction imply that you think punishment is illegitimate. Moral responsibility is about praiseworthiness and blameworthiness. You are morally responsible for some action iff it is appropriate to praise you or to blame you, or would have been so had the action been morally significant in one way or another.

In this paper ‘Incompatibilism’ will be the name of the view that moral responsibility is incompatible with determinism. So according to Incompatibilism, if determinism is true it is never appropriate to praise or blame someone. Why? Different incompatibilists will give you different answers. One might answer by saying that it is a conceptual or linguistic fact that blameworthiness is incompatible with determinism. An example would be saying that the definition of ‘blameworthy’ or the concept of blameworthiness contains within it a claim that for an agent to be blameworthy for X it must have been possible for the agent to do something other than X. On this way of thinking about incompatibilism, if someone believes that determinism is true and also believes that someone is blameworthy, then they accept contradictory claims and are therefore irrational.

Another way to answer the question is to say not that believing someone blameworthy would be inconsistent with a belief in determinism, but that blaming someone would be unfair if determinism were true. This second answer I will call ‘Fairness Incompatibilism.’ There are advantages to adopting Fairness Incompatibilism. One, and probably the historically most important, is that by adopting Fairness Incompatibilism one can answer a criticism made by P.F. Strawson against incompatibilism. Strawson claims that the practice of reacting emotionally to people, a practice many have treated as equivalent to blaming and praising, stands in no need of an external metaphysical justification. This is meant to rule out the demand, made by incompatibilists, that morally responsible agents have a form of agency that implies indeterminism. But considerations of fairness are internal to the practice of reacting emotionally to people, and so if the case for incompatibilism is made by appeal to the concept of fairness, then even if Strawson is right that our practice is immune from purely metaphysical considerations, incompatibilism can still go through. Another motivation for accepting Fairness Incompatibilism is that many have the intuition that if determinism is true then when we blame people we are doing something wrong to them, treating them in a way they do not deserve.

The entire article is here.

Saturday, August 30, 2014

Free Will & Moral Responsibility in a Secular Society

By Michael Shermer
TAM 2014
Originally posted August 10, 2014

Michael Shermer, PhD, presents theory and research on understanding the concepts of free will, moral responsibility, and agency in current American society. He draws from neuroscience, social psychology, and comparative psychology to develop ideas about how moral emotions play a part in understanding moral responsibility and culpability.

 

Tuesday, August 26, 2014

Ethics and the Brains of Psychopaths

The Significance of Psychopaths for Ethical and Legal Reasoning

William Hirstein and Katrina Sifferd
Elmhurst College

Abstract

The emerging neuroscience of psychopathy will have several important implications for our attempts to construct an ethical society. In this article we begin by describing the list of criteria by which psychopaths are diagnosed. We then review four competing neuropsychological theories of psychopathic cognition. The first of these models, Newman’s attentional model, locates the problem in a special type of attentional narrowing that psychopaths have shown in experiments. The second and third, Blair’s amygdala model and Kiehl’s paralimbic model, represent the psychopath’s problem as primarily emotional, including a reduced tendency to experience fear in normally fearful situations and a failure to attach the proper significance to the emotions of others. The fourth model locates the problem at a higher level: a failure of psychopaths to notice and correct for their attentional or emotional problems using “executive processes.” In normal humans, decisions are accomplished via these executive processes, which are responsible for planning actions or inhibiting unwise actions, as well as allowing emotions to influence cognition in the proper way. We review the current state of knowledge of the executive capacities of psychopaths. We then evaluate psychopaths in light of the three major philosophical theories of ethics: utilitarianism, deontological theory, and virtue ethics. Finally, we turn to the difficulty psychopath offenders pose to criminal law, because of the way psychopathy interacts with the various justifications and functions of punishment. We conclude with a brief consideration of the effects of psychopaths on contemporary social structures.

The entire article is here.

Monday, August 4, 2014

Ethics & Free Will

by Mike LaBossiere
Talking Philosophy Blog
Originally published on July 18, 2014

Here is an excerpt:

One impact is that when people have doubts about free will they tend to have less support for retributive punishment. Retributive punishment, as the name indicates, is punishment aimed at making a person suffer for her misdeeds. Doubt in free will did not negatively impact a person’s support for punishment aimed at deterrence or rehabilitation.

While the authors do consider one reason for this, namely that those who doubt free will would regard wrongdoers as analogous to harmful natural phenomena that need to be dealt with rather than subjected to vengeance, this view also matches a common view about moral accountability. To be specific, moral (and legal) accountability is generally proportional to the control a person has over events. To use a concrete example, consider the difference between these two cases. In the first case, Sally is driving well above the speed limit and is busy texting and sipping her latte. She doesn’t see the crossing guard frantically waving his sign and runs over the children in the crosswalk. In the second case, Jane is driving the speed limit and children suddenly run directly in front of her car. She brakes and swerves immediately, but she hits the children. Intuitively, Sally has acted in a way that was morally wrong—she should have been going the speed limit and she should have been paying attention. Jane, though she hit the children, did not act wrongly—she could not have avoided the children and hence is not morally responsible.

The entire blog post is here.

Friday, August 1, 2014

Is Neurolaw Conceptually Confused?

By Neil Levy
J Ethics. 2014 Jun 1;18(2):171-185.

Abstract

In Minds, Brains, and Law, Michael Pardo and Dennis Patterson argue that current attempts to use neuroscience to inform the theory and practice of law founder because they are built on confused conceptual foundations. Proponents of neurolaw attribute to the brain or to its parts psychological properties that belong only to people; this mistake vitiates many of the claims they make. Once neurolaw is placed on a sounder conceptual footing, Pardo and Patterson claim, we will see that its more dramatic claims are false or meaningless, though it might be able to provide inductive evidence for particular less dramatic claims (that a defendant may be lying, or lacks control over their behavior, for instance). In response, I argue that the central conceptual confusions identified by Pardo and Patterson are not confusions at all. Though some of the claims made by its proponents are hasty and sometimes they are confused, there are no conceptual barriers to attributing psychological properties to brain states. Neuroscience can play a role in producing evidence that is more reliable than subjective report or behavior; it therefore holds out the possibility of dramatically altering our self-conception as agents and thereby the law.

The entire article is here.

Moral Hazards & Legal Conundrums of Our Robot-Filled Future

By Greg Miller
Wired
Originally posted July 17, 2014

The robots are coming, and they’re getting smarter. They’re evolving from single-task devices like Roomba and its floor-mopping, pool-cleaning cousins into machines that can make their own decisions and autonomously navigate public spaces. Thanks to artificial intelligence, machines are getting better at understanding our speech and detecting and reflecting our emotions. In many ways, they’re becoming more like us.

Whether you find it exhilarating or terrifying (or both), progress in robotics and related fields like AI is raising new ethical quandaries and challenging legal codes that were created for a world in which a sharp line separates man from machine.

The entire article is here.

Thursday, July 17, 2014

Moral Dilemmas

The Stanford Encyclopedia of Philosophy
Revised June 30, 2014

Here is an excerpt:

What is common to the two well-known cases is conflict. In each case, an agent regards herself as having moral reasons to do each of two actions, but doing both actions is not possible. Ethicists have called situations like these moral dilemmas. The crucial features of a moral dilemma are these: the agent is required to do each of two (or more) actions; the agent can do each of the actions; but the agent cannot do both (or all) of the actions. The agent thus seems condemned to moral failure; no matter what she does, she will do something wrong (or fail to do something that she ought to do).

The Platonic case strikes many as too easy to be characterized as a genuine moral dilemma. For the agent's solution in that case is clear; it is more important to protect people from harm than to return a borrowed weapon. And in any case, the borrowed item can be returned later, when the owner no longer poses a threat to others. Thus in this case we can say that the requirement to protect others from serious harm overrides the requirement to repay one's debts by returning a borrowed item when its owner so demands. When one of the conflicting requirements overrides the other, we do not have a genuine moral dilemma. So in addition to the features mentioned above, in order to have a genuine moral dilemma it must also be true that neither of the conflicting requirements is overridden (Sinnott-Armstrong 1988, Chapter 1).

The entire page is here.

Editor's note: Anyone interested in ethics and morality needs to read this page. It is an excellent source for understanding moral dilemmas generally, as well as the ethical dilemmas that arise in the role of a psychologist.

Tuesday, July 15, 2014

Sexual Assault and Rape Culture

Constructive liberal discourse has been a source of important gains on these issues. The alternatives are toxic.

By Conor Friedersdorf
The Atlantic
Originally posted June 27, 2014

The description of "rape culture" that sums up its insidiousness better than any I've ever seen was published several years ago at the Washington City Paper by Amanda Hess.

"Rape culture does not just encourage men to proceed after she says 'no,'" she wrote. "Rape culture does not simply teach men that a lack of physical resistance is an invitation. Rape culture does not only tell men to assert ownership over whichever female body they desire. Rape culture also tells women not to claim ownership over their own bodies. Rape culture also informs women that they should not desire sex. Rape culture also tells women that saying yes makes them bad women."

The entire article is here.

Thursday, July 3, 2014

Irresponsible brains? The role of consciousness in guilt

By Neil Levy
The Conversation
Originally posted June 5, 2014

Can human beings still be held responsible in the age of neuroscience?

Some people say no: they say once we understand how the brain processes information and thereby causes behaviour, there’s nothing left over for the person to do.

This argument has not impressed philosophers, who say there doesn’t need to be anything left for the person to do in order to be responsible. People are not anything over and above the causal systems involved in information processing; we are our brains (plus some other, equally physical stuff).

The entire article is here.


Sunday, June 29, 2014

Brain Imaging Research Shows How Unconscious Processing Improves Decision-Making

Carnegie Mellon
Press Release
Originally released on February 13, 2013

When faced with a difficult decision, it is often suggested to "sleep on it" or take a break from thinking about the decision in order to gain clarity.

But new brain imaging research from Carnegie Mellon University, published in the journal "Social Cognitive and Affective Neuroscience," finds that the brain regions responsible for making decisions continue to be active even when the conscious brain is distracted with a different task. The research provides some of the first evidence showing how the brain unconsciously processes decision information in ways that lead to improved decision-making.

"This research begins to chip away at the mystery of our unconscious brains and decision-making," said J. David Creswell, assistant professor of psychology in CMU's Dietrich College of Humanities and Social Sciences and director of the Health and Human Performance Laboratory. "It shows that brain regions important for decision-making remain active even while our brains may be simultaneously engaged in unrelated tasks, such as thinking about a math problem. What’s most intriguing about this finding is that participants did not have any awareness that their brains were still working on the decision problem while they were engaged in an unrelated task."

The entire press release is here.

Friday, June 27, 2014

Does 'free will' stem from brain noise?

Press Release
University of California-Davis
Originally published June 9, 2014

Our ability to make choices — and sometimes mistakes — might arise from random fluctuations in the brain's background electrical noise, according to a recent study from the Center for Mind and Brain at the University of California, Davis.

"How do we behave independently of cause and effect?" said Jesse Bengson, a postdoctoral researcher at the center and first author on the paper. "This shows how arbitrary states in the brain can influence apparently voluntary decisions."

The brain has a normal level of "background noise," Bengson said, as electrical activity patterns fluctuate across the brain. In the new study, decisions could be predicted based on the pattern of brain activity immediately before a decision was made.

The entire press release is here.

Wednesday, June 18, 2014

What Are the Implications of the Free Will Debate for Individuals and Society?

By Alfred Mele
Big Questions Online
Originally posted May 6, 2014

Does free will exist? Current interest in that question is fueled by news reports suggesting that neuroscientists have proved it doesn’t. In the last few years, I’ve been on a mission to explain why scientific discoveries haven’t closed the door on free will. To readers interested in a rigorous explanation, I recommend my 2009 book, Effective Intentions. For a quicker read, you might wait for my Free: Why Science Hasn’t Disproved Free Will, to be published this fall.

One major plank in a well-known neuroscientific argument for the nonexistence of free will is the claim that participants in various experiments make their decisions unconsciously. In some studies, this claim is based partly on EEG readings (electrical readings taken from the scalp). In others, fMRI data (about changes in blood oxygen levels in the brain) are used instead. In yet others, with people whose skulls are open for medical purposes, readings are taken directly from the brain. The other part of the evidence comes from participants’ reports on when they first became aware of their decisions. If the reports are accurate (which is disputed), the typical sequence of events is as follows: first, there is the brain activity the scientists focus on, then the participants become aware of decisions (or intentions or urges) to act, and then they act, flexing a wrist or pushing a button, for example.

The entire article is here.