Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Philosophy.

Sunday, November 5, 2023

Is Applied Ethics Morally Problematic?

Franz, D.J.
J Acad Ethics 20, 359–374 (2022).


This paper argues that applied ethics can itself be morally problematic. As illustrated by the case of Peter Singer's criticism of social practice, morally loaded communication by applied ethicists can lead to protests, backlashes, and aggression. By reviewing the psychological literature on self-image, collective identity, and motivated reasoning, three categories of morally problematic consequences of ethical criticism by applied ethicists are identified: serious psychological discomfort, moral backfiring, and hostile conflict. The most worrisome is moral backfiring: psychological research suggests that ethical criticism of people's central moral convictions can reinforce exactly those attitudes. Applied ethicists can therefore unintentionally contribute to a consolidation of precisely those social circumstances that they condemn as unethical. Furthermore, I argue that the normative concerns raised in this paper do not depend on a commitment to one specific paradigm in moral philosophy. Utilitarianism, Aristotelian virtue ethics, and Rawlsian contractarianism all provide sound reasons to take morally problematic consequences of ethical criticism seriously. Only the case of deontological ethics is less clear-cut. Finally, I point out that the issues raised in this paper provide an excellent opportunity for further interdisciplinary collaboration between applied ethics and the social sciences, and I propose strategies for communicating ethics effectively.

Here is my summary:

First, ethical criticism can cause serious psychological discomfort. People often have strong emotional attachments to their moral convictions, and being told that their beliefs are wrong can be very upsetting. In some cases, ethical criticism can even lead to anxiety, depression, and other mental health problems.

Second, ethical criticism can lead to moral backfiring. This is when people respond to ethical criticism by doubling down on their existing beliefs. Moral backfiring is thought to be caused by a number of factors, including motivated reasoning and the need to maintain a positive self-image.

Third, ethical criticism can lead to hostile conflict. When people feel threatened by ethical criticism, they may become defensive and aggressive. This can lead to heated arguments, social isolation, and even violence.

Franz argues that these negative consequences are not just hypothetical. He points to a number of real-world examples, such as the backlash against Peter Singer's arguments for vegetarianism.

The author concludes by arguing that applied ethicists should be aware of the ethical dimension of their own work. They should be mindful of the potential for their work to cause harm, and they should take steps to mitigate these risks. For example, applied ethicists should be careful to avoid making personal attacks on those who disagree with them. They should also be willing to engage in respectful dialogue with those who have different moral views.

Saturday, December 17, 2022

Interaction between games give rise to the evolution of moral norms of cooperation

Salahshour M (2022)
PLoS Comput Biol 18(9): e1010429.


In many biological populations, such as human groups, individuals face a complex strategic setting, where they need to make strategic decisions over a diverse set of issues and their behavior in one strategic context can affect their decisions in another. This raises the question of how the interaction between different strategic contexts affects individuals' strategic choices and social norms. To address this question, I introduce a framework where individuals play two games with different structures and decide upon their strategy in a second game based on their knowledge of their opponent's strategy in the first game. I consider both multistage games, where the same opponents play the two games consecutively, and a reputation-based model, where individuals play their two games with different opponents but receive information about their opponent's strategy. By considering a case where the first game is a social dilemma, I show that when the second game is a coordination or anti-coordination game, the Nash equilibria of the coupled game can be decomposed into two classes: a defective equilibrium, which is composed of two simple equilibria of the two games, and a cooperative equilibrium, in which a coupling between the two games emerges and sustains cooperation in the social dilemma. For the cooperative equilibrium to exist, the cost of cooperation must be smaller than a value determined by the structure of the second game. Investigation of the evolutionary dynamics shows that a cooperative fixed point exists in a mixed population when the second game belongs to the coordination or anti-coordination class. However, the basin of attraction of the cooperative fixed point is much smaller for the coordination class, and this fixed point disappears in a structured population.
When the second game belongs to the anti-coordination class, the system possesses a spontaneous symmetry-breaking phase transition above which the symmetry between cooperation and defection breaks. A set of cooperation-supporting moral norms emerges, according to which cooperation stands out as a valuable trait. Notably, the moral system also brings a more efficient allocation of resources in the second game. This observation suggests a moral system has two different roles: promotion of cooperation, which is against individuals' self-interest but beneficial for the population, and promotion of organization and order, which is in both the population's and the individual's self-interest. Interestingly, the latter acts like a Trojan horse: once established out of individuals' self-interest, it brings the former with it. Importantly, the fact that the evolution of moral norms depends only on the cost of cooperation and is independent of the benefit of cooperation implies that moral norms can be harmful and incur a pure collective cost, yet remain just as effective in promoting order and organization. Finally, the model predicts that recognition noise can have a surprisingly positive effect on the evolution of moral norms and can facilitate cooperation in the Snowdrift game in structured populations.

Author summary

How do moral norms spontaneously evolve in the presence of selfish incentives? An answer to this question is provided by the observation that moral systems have two distinct functions: besides encouraging self-sacrificing cooperation, they also bring organization and order into societies. In contrast to the former, which is costly for the individuals but beneficial for the group, the latter is beneficial for both the group and the individuals. A simple evolutionary model suggests this latter aspect is what makes a moral system evolve based on the individuals' self-interest. However, a moral system behaves like a Trojan horse: once established out of the individuals' self-interest to promote order and organization, it also brings self-sacrificing cooperation with it.
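The coupled-games idea can be illustrated with a toy replicator-dynamics simulation. Everything below is my own minimal sketch under assumed parameters, not Salahshour's actual model: the donation-game form of the social dilemma, the Hawk-Dove stand-in for the anti-coordination game, and all payoff values are placeholders chosen for illustration.

```python
import numpy as np

# Assumed parameters (not from the paper): a donation-game social dilemma
# with benefit b and cost c, coupled to a Hawk-Dove anti-coordination game
# with resource value V and fight cost D (D > V).
b, c = 3.0, 1.0
V, D = 2.0, 3.0

def pd(ci, cj):
    """Donation-game payoff to a player choosing ci against cj (1 = cooperate)."""
    return b * cj - c * ci

def ac(ai, aj):
    """Hawk-Dove payoff to a player choosing ai against aj (1 = Hawk)."""
    if ai and aj:
        return (V - D) / 2.0
    if ai and not aj:
        return V
    if not ai and aj:
        return 0.0
    return V / 2.0

# A strategy is (c, aC, aD): the first-game move, plus the second-game move
# conditioned on whether the opponent cooperated (aC) or defected (aD).
strategies = [(ci, aC, aD) for ci in (0, 1) for aC in (0, 1) for aD in (0, 1)]
n = len(strategies)

def payoff(s, t):
    """Total payoff to strategy s against strategy t in the coupled game."""
    ci, aC_i, aD_i = s
    cj, aC_j, aD_j = t
    ai = aC_i if cj else aD_i   # i conditions on j's first-game move
    aj = aC_j if ci else aD_j   # j conditions on i's first-game move
    return pd(ci, cj) + ac(ai, aj)

A = np.array([[payoff(s, t) for t in strategies] for s in strategies])

# Discrete-time replicator dynamics from a perturbed uniform starting point.
rng = np.random.default_rng(0)
x = rng.dirichlet(np.ones(n))
for _ in range(5000):
    f = A @ x                   # fitness of each strategy in the mixed population
    f = f - f.min() + 1e-9      # shift so fitnesses are positive (fixed points unchanged)
    x = x * f / (x @ f)         # replicator update

coop_share = sum(xi for (ci, _, _), xi in zip(strategies, x) if ci == 1)
print(f"cooperator share at fixed point: {coop_share:.3f}")
```

Varying the cooperation cost `c` in this sketch shows the qualitative point of the abstract: conditioning second-game play on first-game behavior can make cooperation part of a stable outcome only while the cost stays small enough relative to the second game's stakes.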

Saturday, March 19, 2022

The Content of Our Character

Brown, Teneille R.
Available at SSRN: https://ssrn.com/abstract=3665288


The rules of evidence assume that jurors can ignore most character evidence, but the data are clear. Jurors simply cannot *not* make character inferences. We are so driven to use character to assess blame that we will spontaneously infer traits based on whatever limited information is available. In fact, within just 0.1 seconds of meeting someone, we have already decided whether we think they are intelligent, trustworthy, likable, or kind, based just on the person's face. This is a completely unregulated source of evidence, and yet it predicts teaching evaluations, electoral success, and even sentencing decisions. Given the pervasive and unintentional nature of "spontaneous trait inferences" (STIs), they are not susceptible to mitigation through jury instructions. However, recognizing that witnesses will be viewed as more or less trustworthy based just on their faces, the rules of evidence must permit more character evidence, rather than less. This article harnesses undisputed findings from social psychology to propose a reversal of the ban on character evidence, in favor of a strong presumption against admissibility for immoral traits only. This removes a great deal from the rule's crosshairs and re-tethers it to its normative roots. My proposal does not rely on the gossamer-thin distinction between propensity and non-propensity uses, because once jurors hear about past-act evidence, they will subconsciously draw an impermissible character inference. However, in some cases this might not be unfairly prejudicial, and may even be necessary for justice. The critical contribution of this article is that while shielding jurors from character evidence has noble origins, it also has unintended, negative consequences. When jurors cannot hear about how someone acted in the past, they will instead rely on immutable facial features—connected to racist, sexist, and classist stereotypes—to draw character inferences that are even more inaccurate and unfair.

Here is a section

Moral Character Impacts Ratings of Intent

Previous models of intentionality held that for an act to be considered intentional, three things had to be present: the actor must have believed that an action would result in a particular outcome, desired this outcome, and had full awareness of his behavior. Research now challenges this account, "showing that individuals attribute intentions to others even (and largely) in the absence of these components." Even where an actor could not have acted otherwise, and thus was coerced to kill, study participants found the actor to be more morally responsible for an act if he "identified" with it, meaning that he desired the compelled outcome. These findings do not fit with our typical model of blame, which requires freedom to act in order to assign responsibility. However, they make sense if we adopt a character-based approach to blame. We are quick to infer a bad character and intent when there is very little evidence of it.

An example of this is the hindsight bias called the “praise-blame asymmetry,” where people blame actors for accidental bad outcomes that they caused but did not intend, but do not praise people for accidental good outcomes that they likewise caused but did not intend. The classic example is the CEO who considers a development project that will increase profits. The CEO is agnostic to the project’s environmental effects and gives it the go-ahead. If the project’s outcome turns out to harm the environment, people say the CEO intended the bad outcome and they blame him for it. However, if instead the project turns out to benefit the environment, the CEO receives no praise. Our folk conception of intentionality is tied to morality and aversion to negative outcomes. If a foreseen outcome is negative, people will attribute intentionality to the decision-maker, but not if the foreseen outcome is positive; the overattribution of intent only seems to cut one way. Mens rea ascriptions are “sensitive to moral valence . . . . If the outcome is negative, foreknowledge standardly suffices for people to ascribe intentionality.” This effect has been found not just in laypeople, but also in French judges. If an action is considered immoral, then our emotional reaction to it can bias mental state ascriptions.

Monday, February 3, 2020

Explaining moral behavior: A minimal moral model.

Osman, M., & Wiegmann, A.
Experimental Psychology (2017)
64(2), 68-81.


In this review we make a simple theoretical argument: for theory development, computational modeling, and general frameworks for understanding moral psychology, researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models in moral psychology, which tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that a simple value-based decision model can capture a range of core moral behaviors. Crucially, we propose that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.
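A "simple value-based decision model" of the kind the abstract invokes can be sketched as a softmax choice rule over subjective action values, a standard device in judgment-and-decision-making research. The sketch below is my own illustration, not the authors' model; the scenario and all numbers are placeholders.

```python
import math
import random

def softmax_choice(values, temperature=1.0, rng=random):
    """Sample an action index with probability proportional to exp(value / T)."""
    weights = [math.exp(v / temperature) for v in values]
    total = sum(weights)
    probs = [w / total for w in weights]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i, probs
    return len(values) - 1, probs

# Toy moral choice: "do nothing" vs. "act to prevent harm", where acting
# yields a better outcome value minus an aversion cost for causing harm
# directly. All numbers are arbitrary placeholders.
values = [-5.0, -3.5]            # [do nothing, act]
choice, probs = softmax_choice(values)
print(f"P(act) = {probs[1]:.3f}")   # ~0.818 for these values
```

The point of the domain-general framing is that nothing in this rule is specific to morality: the same valuation-plus-noisy-choice machinery used for consumer or perceptual decisions applies once moral considerations are folded into the action values.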

From the Implications section:

If instead moral behavior is viewed as a domain-general process, the findings can easily be accounted for based on existing literature from judgment and decision-making research such as Tversky’s (1969) work on intransitive preferences.

The same benefits of this research approach extend to the moral philosophy domain. As we described at the beginning of the paper, empirical research can inform philosophers as to which moral intuitions are likely to be biased. If moral judgments, decisions, and behavior can be captured by well-developed domain-general theories, then our theoretical and empirical resources for gaining knowledge about moral intuitions would be much greater than the resources provided by moral psychology alone.

The paper can be downloaded here.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in : Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: we experience ourselves as agents, but in fact our brains decide, initiate, and judge before 'we' do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between 'us' and 'our brains' makes no sense (e.g., Dennett 2003), or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuroscientific findings as irrelevant to their views on free will. They do not believe that deterministic processes are incompatible with free will to begin with and hence do not understand why deterministic processes in our brain would be (see Sie and Wouters 2008, 2010). That latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it), and the main question discussed is whether that assumption is compatible with determinism. In this chapter we want to steer clear of this 'metaphysical' discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the 'pragmatic sentimentalist' approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson's influential essay "Freedom and Resentment" (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called 'sentimentalist.' In this approach, the practical purposes of the concept of free will take center stage. This is why it is called 'pragmatist.'

A draft of the book chapter can be downloaded here.

Friday, November 3, 2017

A fundamental problem with Moral Enhancement

Joao Fabiano
Practical Ethics
Originally posted October 13, 2017

Moral philosophers often prefer to conceive thought experiments, dilemmas, and problem cases involving single individuals who make one-shot decisions with well-defined short-term consequences. Morality is complex enough that such simplifications seem justifiable, or even necessary, for philosophical reflection. If, even when considering simplified toy scenarios, we are still far from consensus on which is the best moral theory, on what makes actions right or wrong, or on whether such questions should be the central problem of moral philosophy at all, then introducing group or long-term effects would make matters significantly worse. However, when it comes to actually changing human moral dispositions with the use of technology (i.e., moral enhancement), ignoring the essential fact that morality deals with group behaviour with long-ranging consequences can be extremely risky. Despite those risks, attempting to provide a full account of morality before conducting moral enhancement would be both impractical and arguably risky in its own right. We seem to be far from such an account, yet there are pressing current moral failings, such as our inability to cooperate properly at large scale, which makes solving present global catastrophic risks, such as global warming or nuclear war, next to impossible. Sitting back and waiting for a complete theory of morality might be riskier than attempting to fix our moral failings using incomplete theories. We must, nevertheless, proceed with caution and an awareness of such incompleteness. Here I will present several severe risks of moral enhancement that arise from focusing on improving individual dispositions while ignoring emergent societal effects, and I will point to tentative solutions to those risks. I deem those emergent risks fundamental problems, both because they lie at the foundation of the theoretical framework guiding moral enhancement – moral philosophy – and because they seem, for the time being, inescapable; my proposed solution will aim at increasing awareness of such problems rather than directly solving them.

The article is here.

Thursday, July 28, 2016

Driverless Cars: Can There Be a Moral Algorithm?

By Daniel Callahan
The Hastings Center
Originally posted July 5, 2016

Here is an excerpt:

The surveys also showed a serious tension between reducing pedestrian deaths and maximizing the driver's personal protection. Drivers will want the latter, but regulators might come out on the utilitarian side, reducing harm to others. The researchers conclude by saying that a "moral algorithm" taking account of all these variations is needed, and that they "will need to tackle more intricate decisions than those considered in our survey." As if there were not enough already.

Just who is to do the tackling? And how can an algorithm of that kind be created?  Joshua Greene has a decisive answer to those questions: “moral philosophers.” Speaking as a member of that tribe, I feel flattered. He does, however, get off on the wrong diplomatic foot by saying that “software engineers–unlike politicians, philosophers, and opinionated uncles—don’t have the luxury of vague abstractions.” He goes on to set a high bar to jump. The need is for “moral theories or training criteria sufficiently precise to determine exactly which rights people have, what virtue requires, and what tradeoffs are just.” Exactly!

I confess up front that I don't think we can do it. Maybe people in Greene's professional tribe turn out exact algorithms with every dilemma they encounter. If so, we envy them for having all the traits of software engineers. No such luck for us. We will muddle through on these issues as we have always done—muddle through because exactness is rare (and its claimants suspect), because the variables will all change over time, and because there is a varied set of actors (drivers, manufacturers, purchasers, and insurers), each with different interests and values.

The article is here.

Saturday, May 9, 2015

The Point of Studying Ethics According to Kant

Lucas Thorpe
The Journal of Value Inquiry (2006) 40:461–474
DOI 10.1007/s10790-006-9002-3

Many readers of Kant’s ethical writings take him to be primarily concerned with offering guidelines for action. At the least, they write about Kant as if this were the purpose of his ethical writings. For example, Christine Korsgaard, in her influential article Kant’s Analysis of Obligation: The Argument of Groundwork I, writes that, ‘‘the argument of Groundwork I is an attempt to give what I call a ‘motivational analysis’ of the concept of a right action, in order to discover what that concept applies to, that is, which actions are right.’’  Similar comments are not hard to find in the secondary literature. This, however, is a fundamentally misguided way of reading Kant, since he repeatedly asserts that we do not need to do moral philosophy in order to discover which actions are right.  We already know how to behave morally and do not need philosophers to tell us this. ‘‘Common human reason,’’ Kant argues, ‘‘knows very well how to distinguish in every case that comes up what is good and what is evil, what is in conformity to duty or contrary to duty.’’  Because people with pre-philosophical understanding know how to act morally, the purpose of moral philosophy cannot be to provide us with a set of rules for correct behavior. If we take Kant’s claims about common human reason seriously, then his aim in the Groundwork of the Metaphysics of Morals cannot be to discover which actions are right.

The article is here.

Thursday, March 19, 2015

On holding ethicists to higher moral standards and the value of moral inconsistency

By Carissa Véliz
Practical Ethics
Originally posted February 27, 2015

Here is an excerpt:

Should ethicists be held to higher moral standards? If they commit a wrong about which they know more than others, then it seems plausible that they do have more responsibility and should be held to higher moral standards. In many cases, however, moral philosophers appear to be on a par with non-ethicists when it comes to ethical knowledge. Most people who cheat on their spouses, for example, have roughly the same knowledge of the wrong they are committing; this includes moral philosophers, since the ethics of faithfulness is not frequently discussed in academic settings, nor is it something most moral philosophers read or write about.

The entire article is here.

A similar paper, The Self-Reported Moral Behavior of Ethics Professors, by Eric Schwitzgebel and Joshua Rust can be found here.

Thursday, November 20, 2014

Teaching Moral Values

Panellists: Michael Portillo, Anne McElvoy, Claire Fox and Giles Fraser

Witnesses: Adrian Bishop, Dr. Sandra Cooke, Professor Jesse Prinz and Dr. Ralph Levinson

Teaching your children a set of moral values to live their lives by is arguably one of the most important aspects of being a parent - and for some, one of the most neglected. In Japan that job could soon be handed to teachers and become part of the school curriculum. The Central Council for Education is making preparations to introduce moral education as an official school subject, on a par with traditional subjects like Japanese, mathematics and science. In a report, the council says that since moral education plays an important role not only in helping children realise a better life for themselves but also in ensuring sustainable development of the Japanese state and society, it should be taught more formally and the subject codified. The prospect of the state defining a set of approved values to be taught raises some obvious questions, but is it very far away from what we already accept? School websites often talk of their "moral ethos". The much-quoted aphorism "give me the child until he is seven and I'll give you the man" is attributed to the Jesuits, and why are church schools so popular if it's not for their faith-based ethos? Moral philosophy is an enormously diverse subject, but why not use it to give children a broad set of tools and questions to ask, to help them make sense of a complex and contradictory world? If we try to make classrooms morally neutral zones, are we just encouraging moral relativism? Our society is becoming increasingly secular and finding it hard to define a set of common values. As another disputed epigram puts it: "When men stop believing in God, they don't believe in nothing. They believe in anything."

Could moral education fill the moral vacuum?

Moral Maze - Presented by Michael Buerk

The audio file can be accessed here.