Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Agency.

Tuesday, April 25, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., Sifferd, K. 
Ethic Theory Moral Prac (2023).
https://doi.org/10.1007/s10677-023-10385-1

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Conclusions

In this paper we raised two challenges to McGeer’s scaffolded reasons-responsiveness account: agents who are less attuned to social feedback such as autistics, and corrupting moral audiences. We found that, once we parsed the two roles that feedback from a moral audience plays, autistics provide reasons to revise the scaffolded reasons-responsiveness account. We argued that autistic persons, like neurotypicals, wish to justify their behaviour to a moral audience and rely on their moral audience for feedback. However, autistic persons may need more explicit feedback when it comes to the effects their behaviour has on others. They also compensate for difficulties they have in receiving information from the moral audience by justifying action through appeal to moral rules. This shows that McGeer’s view of moral agency needs to include observance of moral rules as a way of reducing reliance on audience feedback. We suspect that McGeer would approve of this proposal, as she mentions that an instance of blame can lead to vocal protest by the target, and a possible renegotiation of norms and rules for what constitutes acceptable behaviour (2019). Consideration of corrupting audiences highlights a different problem from that of resisting blame and renegotiating norms. It draws attention to cases where individual agents must try to go beyond what is accepted in their moral environment, a significant challenge for social beings who rely strongly on moral audiences in developing and calibrating their moral reasons-responsiveness. Resistance to a moral audience requires the capacity to evaluate the action differently; often this will be with reference to a moral rule or principle.

For both neurotypical and autistic individuals, consistent application of moral rules or principles can reinforce and bring back to mind important moral commitments when we are led astray by our own desires or specific (im)moral audiences. But moral audiences still play a crucial role in developing and maintaining reasons-responsiveness. First, they are essential to the development and maintenance of all agents’ moral sensitivity. Second, they can provide an important moral corrective where people may have moral blindspots, especially when they provide insights into ways in which a person has fallen short morally by not taking on board reasons that are not obvious to them. Often, these can be reasons which pertain to the respectful treatment of others who are in some important way different from that person.


In sum: Be responsible and accountable in your actions, as your moral audience is always watching. Doing the right thing matters not just for your reputation, but for the greater good. #ResponsibleAgency #MoralAudience

Sunday, October 9, 2022

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines 30, 195–218 (2020).
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Free will and Autonomy

Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007) while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. Others (Powers 2006; Tonkens 2009) have proposed that the plausibility of a free will condition on moral agency may vary depending on what type of normative ethical theory is assumed, but they have not developed this idea further.

Despite appealing to the concept of free will, this portion of the AMA debate does not engage with key problems in the free will literature, such as the debate about compatibilism and incompatibilism (O’Connor 2016). Those in the AMA debate assume the existence of free will among humans, and ask whether artificial entities can satisfy a source control condition (McKenna et al. 2015). That is, the question is whether or not such entities can be the origins of their actions in a way that allows them to control what they do in the sense assumed of human moral agents.

An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that ‘… the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent’ (Johnson 2006 p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is needed for moral agency and that humans do possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Shaw et al. 2019). Additionally, we note that Johnson’s idea about the presence of a ‘mysterious aspect’ of human moral agents might allow for AMA in the same way as Dreyfus and Hubert’s reference to the subconscious: artificial entities may be built to incorporate this aspect.

The question of sourcehood in the AMA debate connects to the independence argument: For instance, when it is claimed that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008), this is thought to imply that machines can never be the true or genuine source of their own actions. This argument questions whether the independence required for moral agency (by both functionalists and standardists) can be found in a machine. If a machine’s repertoire of behaviors and responses is the result of elaborate design then it is not independent, the argument goes. Floridi and Sanders question this proposal by referring to the complexity of ‘human programming’, such as genes and arranged environmental factors (e.g. education). 

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.

Thursday, May 13, 2021

Technology and the Value of Trust: Can we trust technology? Should we?

John Danaher
Philosophical Disquisitions
Originally published 30 Mar 21

Can we trust technology? Should we try to make technology, particularly AI, more trustworthy? These are questions that have perplexed philosophers and policy-makers in recent years. The EU’s High Level Expert Group on AI has, for example, recommended that the primary goal of AI policy and law in the EU should be to make the technology more trustworthy. But some philosophers have critiqued this goal as being borderline incoherent. You cannot trust AI, they say, because AI is just a thing. Trust can exist between people and other people, not between people and things.

This is an old debate. Trust is a central value in human society. The fact that I can trust my partner not to betray me is one of the things that makes our relationship workable and meaningful. The fact that I can trust my neighbours not to kill me is one of the things that allows me to sleep at night. Indeed, so implicit is this trust that I rarely think about it. It is one of the background conditions that makes other things in my life possible. Still, it is true that when I think about trust, and when I think about what it is that makes trust valuable, I usually think about trust in my relationships with other people, not my relationships with things.

But would it be so terrible to talk about trust in technology? Should we use some other term instead such as ‘reliable’ or ‘confidence-inspiring’? Or should we, as some blockchain enthusiasts have argued, use technology to create a ‘post-trust’ system of social governance?

I want to offer some quick thoughts on these questions in this article. I will do so in four stages. First, I will briefly review some of the philosophical debates about trust in people and trust in things. Second, I will consider the value of trust, distinguishing between its intrinsic and extrinsic components. Third, I will suggest that it is meaningful to talk about trust in technology, but that the kind of trust we have in technology has a different value to the kind of trust we have in other people. Finally, I will argue that most talk about building ‘trustworthy’ technology is misleading: the goal of most of these policies is to obviate or override the need for trust.

Wednesday, February 17, 2021

Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

Heersmink, R. 
Sci Eng Ethics 23, 431–448 (2017). 
https://doi.org/10.1007/s11948-016-9802-1

Abstract

There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes and (b) have a certain moral status which is contingent on their cognitive status, and I consider (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.

Discussion

Both Floridi and Verbeek argue that moral actions, either positive or negative, can be the result of interactions between humans and technology, giving artifacts a much more prominent role in ethical theory than most philosophers have. They both develop a non-anthropocentric systems approach to morality. Floridi focuses on large-scale ‘‘multiagent systems’’, whereas Verbeek focuses on small-scale ‘‘human–technology associations’’. But both attribute morality or moral agency to systems comprising humans and technological artifacts. On their views, moral agency is thus a system property and not found exclusively in human agents. Does this mean that the artifacts and software programs involved in the process have moral agency? Neither of them attributes moral agency to the artifactual components of the larger system. It is not inconsistent to say that the human-artifact system has moral agency without saying that its artifactual components have moral agency. Systems often have different properties than their components. The difference between Floridi and Verbeek’s approach roughly mirrors the difference between distributed and extended cognition, in that Floridi and distributed cognition theory focus on large-scale systems without central controllers, whereas Verbeek and extended cognition theory focus on small-scale systems in which agents interact with and control an informational artifact. In Floridi’s example, the technology seems semi-autonomous: the software and computer systems automatically do what they are designed to do. Presumably, the money is automatically transferred to Oxfam, implying that technology is a mere cog in a larger socio-technical system that realises positive moral outcomes. There seems to be no central controller in this system: it is therefore difficult to see it as an extended agency whose intentions are being realised.

Sunday, August 16, 2020

Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective

Zhu, Q., Williams, T., Jackson, B. et al.
Sci Eng Ethics (2020).
https://doi.org/10.1007/s11948-020-00246-w

Abstract

Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. By drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes to respond to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered as one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of the Confucian theories for designing socially integrated and morally competent robots.

Wednesday, July 8, 2020

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines (2020). 
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Conclusion

We have argued that to be able to contribute to pressing practical problems, the debate on AMA should be redirected to address outright normative ethical questions. Specifically, the questions of how and to what extent artificial entities should be involved in human practices where we normally assume moral agency and responsibility. The reason for our proposal is the high degree of conceptual confusion and lack of practical usefulness of the traditional AMA debate. And this reason seems especially strong in light of the current fast development and implementation of advanced, autonomous and self-evolving AI and robotic constructs.

Thursday, February 14, 2019

Can artificial intelligences be moral agents?

Bartosz Brożek and Bartosz Janik
New Ideas in Psychology
Available online 8 January 2019

Abstract

The paper addresses the question whether artificial intelligences can be moral agents. We begin by observing that philosophical accounts of moral agency, in particular Kantianism and utilitarianism, are very abstract theoretical constructions: no human being can ever be a Kantian or a utilitarian moral agent. Ironically, it is easier for a machine to approximate this idealised type of agency than it is for homo sapiens. We then proceed to outline the structure of human moral practices. Against this background, we identify two conditions of moral agency: internal and external. We argue further that the existing AI architectures are unable to meet the two conditions. In consequence, machines - at least at the current stage of their development - cannot be considered moral agents.

Here is the conclusion:

The second failure of the artificial agents - to meet the internal condition of moral agency - is connected to the fact that their behaviour is not emotion driven. This makes it impossible for them to fully take part in moral practices. A Kantian or a Benthamian machine, acting on a set of abstract rules, would simply be no fit for the complex, culture-dependent and intuition-based practices of any particular community. Finally, both failures are connected: the more human-like machines become, i.e. the more capable they are of fully participating in moral practices, the more likely it is that they will also be recognised as moral agents.

The info is here.

Sunday, September 9, 2018

People Are Averse to Machines Making Moral Decisions

Yochanan E. Bigman and Kurt Gray
In press, Cognition

Abstract

Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1-6 find that people are averse to machines making morally-relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5-6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7-9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.

The research is here.

Sunday, May 15, 2016

Legal Insanity and Executive Function

Katrina Sifferd, William Hirstein, and Tyler Fagan
Under review to be included in The Insanity Defense: Multidisciplinary Views on Its History, Trends, and Controversies (Mark D. White, Ed.) Praeger (expected Nov. 2016)

1. The cognitive capacities relevant to legal insanity

Legal insanity is a legal concept rather than a medical one. This may seem an obvious point, but it is worth reflecting on the divergent purposes and motivations for legal, as opposed to medical, concepts. Medical categories of disease are shaped by the medical professions’ aims of understanding, diagnosing, and treating illness. Categories of legal excuse, on the other hand, serve the aims of determining criminal guilt and punishment.

A theory of legal responsibility and its criteria should exhibit symmetry between the capacities it posits as necessary for moral, and more specifically, legal agency, and the capacities that, when dysfunctional or compromised, qualify a defendant for an excuse. To put this point more strongly, the capacities necessary for legal agency should necessarily disqualify one from legal culpability when sufficiently compromised. Thus one’s view of legal insanity ought to reflect whatever one thinks are the overall purposes of the criminal law.  If the purpose of criminal punishment is social order, then legal agency entails the capacity to be law-abiding such that one does not undermine the social order. If the purpose is institutionalized moral blame for wrongful acts, then legal agency entails the capacities for moral agency. If a criminal code embraces a hybrid theory of criminal law, then all of these capacities are relevant to legal agency.

In this chapter we will argue that the capacities necessary to moral and legal agency can be understood as executive functions in the brain.

The chapter is here.

Saturday, May 2, 2015

Free Will and Autonomous Medical Decision-Making

Butkus, Matthew A. 2015. “Free Will and Autonomous Medical Decision-Making.”
Journal of Cognition and Neuroethics 3 (1): 75–119.

Abstract

Modern medical ethics makes a series of assumptions about how patients and their care providers make decisions about forgoing treatment. These assumptions are based on a model of thought and cognition that does not reflect actual cognition—it has substituted an ideal moral agent for a practical one. Instead of a purely rational moral agent, current psychology and neuroscience have shown that decision-making reflects a number of different factors that must be considered when conceptualizing autonomy. Multiple classical and contemporary discussions of autonomy and decision-making are considered and synthesized into a model of cognitive autonomy. Four categories of autonomy criteria are proposed to reflect current research in cognitive psychology and common clinical issues.

The entire article is here.

Wednesday, June 4, 2014

Healthy behavior matters. So are we responsible if we get sick?

By Bill Gardner
The Incidental Economist
Originally published May 30, 2014

I have been warned my whole life that I shouldn’t smoke. The evidence that smoking affects health is overwhelming. Suppose I understand all this, but I smoke anyway. And then I get lung cancer. Am I responsible for what happened to me, given that I was aware of the consequences yet behaved recklessly anyway?

Whether we are responsible for our health affects how we think about health policy. The ACA subsidizes insurance, and thus the cost of health care, for millions of Americans. Many people feel that it is right to care for those who are ill through no fault of their own, but they do not understand why they should be responsible when someone becomes sick through reckless behaviour or self-indulgence. Our intuition is that such people are (to some degree) morally responsible for their fate.

The entire article is here.

Friday, May 30, 2014

Now The Military Is Going To Build Robots That Have Morals

By Patrick Tucker
Defense One
Originally posted May 13, 2014

Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

The entire article is here.

Friday, May 16, 2014

Blaming the Kids: Children’s Agency and Diminished Responsibility

By Michael Tiboris
Journal of Applied Philosophy, Vol. 31, No. 1, 2014
doi: 10.1111/japp.12046

Abstract

Children are less blameworthy for their beliefs and actions because they are young. But the relationship between development and responsibility is complex. What exactly grounds the excuses we rightly give to young agents? This article presents three distinct arguments for children's diminished responsibility. Drawing on significant resources from developmental psychology, it rejects views which base the normative adult/child distinction on children's inability to participate in certain kinds of moral communication or to form principled self-conceptions which guide their actions. The article then argues that children's responsibility ought to be diminished because (and to the degree that) they are less competent at using features of their moral agency to meet social demands. This ‘normative competence’ view is philosophically defensible, supported by research in developmental psychology, and provides us with a method to evaluate whether things like peer pressure are relevant to responsibility.

The entire article is here.

Saturday, May 10, 2014

We may never teach robots about love, but what about ethics?

Do androids dream of electric Kant?

By Emma Woollacott
New Statesman
Originally published May 6, 2014

Here are two excerpts:

But as AJung Moon of the University of British Columbia points out, "It's really hard to create a robot that would have the same sense of moral agency as a human being. Part of the reason is that people can't even agree on what is the right thing to do. What would be the benchmark?"

Her latest research, led by colleague Ergun Calisgan, takes a pragmatic approach to the problem by examining a robot tasked with delivering a package in a building with only one small lift. How should it act? Should it push ahead of a waiting human? What if its task is urgent? What if the person waiting is in a wheelchair?

(cut)

Indeed, Professor Ronald Craig Arkin of the Georgia Institute of Technology has proposed an "ethical adaptor", designed to give a military robot what he describes as a sense of guilt. Guilt racks up, according to a pre-determined formula, as the robot perceives after an event that it has violated the rules of engagement - perhaps by killing a civilian in error - or if it is criticised by its own side. Once its guilt reaches a certain pre-determined level, the robot is denied permission to fire.
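The mechanism described here amounts to a simple accumulator with a cutoff. The sketch below is only an illustration of that idea: the event names, weights, and threshold value are hypothetical stand-ins, not details of Arkin's actual ethical adaptor or its pre-determined formula.

# Minimal sketch of a guilt accumulator with a firing cutoff.
# All event names, weights, and the threshold are illustrative assumptions.

GUILT_WEIGHTS = {
    "perceived_civilian_harm": 5.0,        # robot judges, after the event, that a civilian was harmed
    "rules_of_engagement_violation": 3.0,  # other perceived breach of the rules of engagement
    "criticism_from_own_side": 1.0,        # rebuke from the robot's own side
}

class EthicalAdaptor:
    """Accumulates 'guilt' for perceived violations; past a threshold, firing is denied."""

    def __init__(self, threshold: float = 10.0):
        self.guilt = 0.0
        self.threshold = threshold

    def register_event(self, event: str) -> None:
        # Guilt racks up by a fixed weight per event type.
        self.guilt += GUILT_WEIGHTS.get(event, 0.0)

    def weapon_release_permitted(self) -> bool:
        # Once accumulated guilt reaches the threshold, permission to fire is withheld.
        return self.guilt < self.threshold

adaptor = EthicalAdaptor(threshold=10.0)
adaptor.register_event("rules_of_engagement_violation")  # guilt = 3.0
print(adaptor.weapon_release_permitted())                # True: still below threshold
adaptor.register_event("perceived_civilian_harm")        # guilt = 8.0
adaptor.register_event("perceived_civilian_harm")        # guilt = 13.0
print(adaptor.weapon_release_permitted())                # False: threshold reached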

The entire article is here.

Saturday, February 15, 2014

Paul Russell on Free Will and Responsibility

Many philosophical theories try to evade the uncomfortable truth that luck and fate play a role in the conduct of our moral lives, argues philosopher Paul Russell. He chooses the best books on free will and responsibility.

Fivebooks.com
Interview by Nigel Warburton
Originally published December 3, 2013

Here is an excerpt:

Q: Most people feel, to some degree, in control of how they behave. There may be moments when they become irrational and other forces take over,  or where outside people force them to do things, but if I want to raise my hand or say “Stop!” those things seem to be easily within my conscious control. We also feel very strongly that people, including ourselves, merit praise and blame for the actions they perform because it’s us that’s performing them. It’s not someone else doing those things. And if we do something wrong, knowingly, it’s right to blame us for that.

A: That’s right. The common sense view — although we may articulate it in different ways in different cultures — is that there is some relevant sense in which we are in control and we are morally accountable. What makes philosophy interesting is that sceptical arguments can be put forward that appear to undermine or discredit our confidence in this common sense position. One famous version of this difficulty has theological roots. If, as everyone once assumed, there is a God, who creates the world and has the power to decide all that happens in it, then our common sense view of ourselves as free agents seems to be threatened, since God controls and guides everything that happens – including all our actions. Similar or related problems seem to arise with modern science.

The entire interview is here.

Sunday, February 2, 2014

The Distinction Between Antisociality And Mental Illness

By Abigail Marsh
Edge.org
Originally published January 15, 2014

Here is an excerpt:

Cognitive biases include widespread tendencies to view actions that cause harm to others as fundamentally more intentional and blameworthy than identical actions that happen not to result in harm to others, as has been shown by Joshua Knobe and others in investigations of the "side-effect effect", and to view agents who cause harm as fundamentally more capable of intentional and goal-directed behavior than those who incur harm, as has been shown by Kurt Gray and others in investigations of the distinction between moral agents and moral patients. These biases dictate that an individual who is predisposed to behavior that harms others as a result of genetic and environmental risk factors will be inherently viewed as more responsible for his or her behaviors than another individual predisposed to behavior that harms himself as a result of similar genetic and environmental risk factors. The tendency to view those who harm others as responsible for their actions, and thus blameworthy, may reflect seemingly evolved tendencies to reinforce social norms by blaming and punishing wrongdoers for their misbehavior.

The entire blog post is here.

Sunday, January 19, 2014

Sanity of Psychologist’s Killer Is Again at Issue

By JAMES C. McKINLEY Jr.
The New York Times
Published: January 2, 2014

The mental health of a man accused of killing a psychologist in her Upper East Side office was once again in question on Thursday, just as a judge in Manhattan was about to set a date for a new trial because the first one ended in a hung jury.

Lawyers for the man, David Tarloff, 45, said during a hearing on Thursday that a court-appointed psychiatrist at Bellevue Hospital Center had found him unfit to stand trial during an examination in November.

The entire story is here.

Sunday, January 5, 2014

Moral Responsibility and PAP

By Heath White
PEA Soup
A blog dedicated to philosophy, ethics, and academia
Originally posted December 19, 2013

Does moral responsibility require the ability to do otherwise?  For example, must one have been able to refrain from an evil deed if one is to be appropriately blamed for it?  The answer turns on the truth of a familiar principle:

(PAP) If S is blameworthy for doing X, S must have been able to do otherwise than X.

The traditional view is that (PAP) is true; Frankfurt argued that it was false, with a form of example which is still widely discussed.  I’m going to argue for Frankfurt’s conclusion in a way that has nothing to do with Frankfurt-style examples.  I’d be interested in feedback.

Blaming (or punishing) someone for failing to live up to a moral standard is a special case of a more general phenomenon.  There are many cases where there is some kind of requirement, someone fails to live up to it, and negative consequences are imposed as a result.  It is instructive to look at how we view “couldn’t have done otherwise” in these other cases.

The entire blog post is here.

Wednesday, January 1, 2014

The Essential Moral Self

Strohminger, N. and Nichols, S. (in press).
The Essential Moral Self. Cognition.

Abstract

It has often been suggested that the mind is central to personal identity. But do all parts of the mind contribute equally? Across five experiments, we demonstrate that moral traits—more than any other mental faculty—are considered the most essential part of identity, the self, and the soul. Memory, especially emotional and autobiographical memory, is also fairly important. Lower-level cognition and perception have the most tenuous connection to identity, rivaling that of purely physical traits. These findings suggest that folk notions of personal identity are largely informed by the mental faculties affecting social relationships, with a particularly keen focus on moral traits.

(cut)

Discussion

The studies described here illustrate several points about lay theories of personal identity. The first, most basic, point is that not all parts of the mind are equally constitutive of the self, challenging a straightforward view of psychological continuity. Identity does not simply depend on the magnitude of retained mental content; indeed, certain cognitive processes contribute less to identity than purely physical traits.

Across five experiments, we find strong and unequivocal support for the essential moral self hypothesis. Moral traits are considered more important to personal identity than any other part of the mind.

The entire article is here.