Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Reasoning.

Sunday, November 5, 2023

Is Applied Ethics Morally Problematic?

Franz, D.J.
J Acad Ethics 20, 359–374 (2022).
https://doi.org/10.1007/s10805-021-09417-1

Abstract

This paper argues that applied ethics can itself be morally problematic. As illustrated by the case of Peter Singer’s criticism of social practice, morally loaded communication by applied ethicists can lead to protests, backlashes, and aggression. By reviewing the psychological literature on self-image, collective identity, and motivated reasoning, three categories of morally problematic consequences of ethical criticism by applied ethicists are identified: serious psychological discomfort, moral backfiring, and hostile conflict. The most worrisome is moral backfiring: psychological research suggests that ethical criticism of people’s central moral convictions can reinforce exactly those attitudes. Applied ethicists can therefore unintentionally contribute to a consolidation of precisely those social circumstances that they condemn as unethical. Furthermore, I argue that the normative concerns raised in this paper do not depend on commitment to one specific paradigm in moral philosophy. Utilitarianism, Aristotelian virtue ethics, and Rawlsian contractarianism all provide sound reasons to take morally problematic consequences of ethical criticism seriously. Only the case of deontological ethics is less clear-cut. Finally, I point out that the issues raised in this paper provide an excellent opportunity for further interdisciplinary collaboration between applied ethics and the social sciences. I also propose strategies for communicating ethics effectively.


Here is my summary:

First, ethical criticism can cause serious psychological discomfort. People often have strong emotional attachments to their moral convictions, and being told that their beliefs are wrong can be very upsetting. In some cases, ethical criticism can even lead to anxiety, depression, and other mental health problems.

Second, ethical criticism can lead to moral backfiring. This is when people respond to ethical criticism by doubling down on their existing beliefs. Moral backfiring is thought to be caused by a number of factors, including motivated reasoning and the need to maintain a positive self-image.

Third, ethical criticism can lead to hostile conflict. When people feel threatened by ethical criticism, they may become defensive and aggressive. This can lead to heated arguments, social isolation, and even violence.

Franz argues that these negative consequences are not just hypothetical. He points to a number of real-world examples, such as the backlash against Peter Singer's arguments for vegetarianism.

The author concludes by arguing that applied ethicists should be aware of the ethical dimension of their own work. They should be mindful of the potential for their work to cause harm, and they should take steps to mitigate these risks. For example, applied ethicists should be careful to avoid making personal attacks on those who disagree with them. They should also be willing to engage in respectful dialogue with those who have different moral views.

Wednesday, March 16, 2022

Autonomy and the Folk Concept of Valid Consent

Demaree-Cotton, J., & Sommers, R. 
(2021, August 17). 
https://doi.org/10.31234/osf.io/p4w8g

Abstract

Consent governs innumerable everyday social interactions, including sex, medical exams, the use of property, and economic transactions. Yet little is known about how ordinary people reason about the validity of consent. Across the domains of sex, medicine, and police entry, Study 1 showed that when agents lack autonomous decision-making capacities, participants are less likely to view their consent as valid; however, failing to exercise this capacity and deciding in a nonautonomous way did not reduce consent judgments. Study 2 found that specific and concrete incapacities reduced judgments of valid consent, but failing to exercise these specific capacities did not, even when the consenter makes an irrational and inauthentic decision. Finally, Study 3 showed that the effect of autonomy on judgments of valid consent carries important downstream consequences for moral reasoning about the rights and obligations of third parties, even when the consented-to action is morally wrong. Overall, these findings suggest that laypeople embrace a normative, domain-general concept of valid consent that depends consistently on the possession of autonomous capacities, but not on the exercise of these capacities. Autonomous decisions and autonomous capacities thus play divergent roles in moral reasoning about consent interactions: while the former appears relevant for assessing the wrongfulness of consented-to acts, the latter plays a role in whether consent is regarded as authoritative and therefore as transforming moral rights.

Conclusion 

Before these studies, it remained an open possibility that “valid consent” as a rich and normatively complex force existed only as a technical concept used in philosophical, legal and academic domains. We found, however, that the folk concept of consent involves normative distinctions between valid and invalid consent that are sensitive to the consenter’s autonomy, even if the linguistic utterance of “yes” is held constant, and that this concept plays an important role in moral reasoning. 

Specifically, the studies presented here examined the relationship between autonomy and intuitive judgments of valid consent in several domains: medical procedures, sexual relations, police searches, and agreements between buyers and sellers.  Across scenarios, we found that judgments of valid consent carried a specific relationship to autonomy: whether an agent possesses the mental capacity to make decisions in an autonomous way has a consistent impact on whether their consent is regarded as valid, and thus whether it was regarded as morally transformative of the rights and obligations of the consenter and of third parties.  Yet, whether the agent in fact makes their decision in an autonomous, rational way—based on their own authentic values and what is right for them—has little impact on perceptions of consent or associated rights, although it has relevance for whether the consent-obtainer is acting wrongly.  Autonomy thus has a subtle role in the ordinary reasoning about morally transformative consent, where consent given by an agent with autonomous capacities has a distinctive role in downstream moral reasoning.

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition, Volume 209, 
April 2021, 104572

Abstract

Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To what extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support for the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support for its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Saturday, November 14, 2020

Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat

Schwitzgebel, E. et al.
Cognition
Volume 203, October 2020

Abstract

Do university ethics classes influence students' real-world moral choices? We aimed to conduct the first controlled study of the effects of ordinary philosophical ethics classes on real-world moral choices, using non-self-report, non-laboratory behavior as the dependent measure. We assigned 1332 students in four large philosophy classes to either an experimental group on the ethics of eating meat or a control group on the ethics of charitable giving. Students in each group read a philosophy article on their assigned topic and optionally viewed a related video, then met with teaching assistants for 50-minute group discussion sections. They expressed their opinions about meat ethics and charitable giving in a follow-up questionnaire (1032 respondents after exclusions). We obtained 13,642 food purchase receipts from campus restaurants for 495 of the students, before and after the intervention. Purchase of meat products declined in the experimental group (52% of purchases of at least $4.99 contained meat before the intervention, compared to 45% after) but remained the same in the control group (52% both before and after). Ethical opinion also differed, with 43% of students in the experimental group agreeing that eating the meat of factory farmed animals is unethical compared to 29% in the control group. We also attempted to measure food choice using vouchers, but voucher redemption rates were low and no effect was statistically detectable. It remains unclear what aspect of instruction influenced behavior.

Friday, August 7, 2020

Technology Can Help Us, but People Must Be the Moral Decision Makers

Andrew Briggs
medium.com
Originally posted June 8, 2020

Here is an excerpt:

Many individuals in technology fields see tools such as machine learning and AI as precisely that — tools — which are intended to be used to support human endeavors, and they tend to argue how such tools can be used to optimize technical decisions. Those people concerned with the social impacts of these technologies tend to approach the debate from a moral stance and to ask how these technologies should be used to promote human flourishing.

This is not an unresolvable conflict, nor is it purely academic. As the world grapples with the coronavirus pandemic, society is increasingly faced with decisions about how technology should be used: Should sick people’s contacts be traced using cell phone data? Should AIs determine who can or cannot work or travel based on their most recent COVID-19 test results? These questions have both technical and moral dimensions. Thankfully, humans have a unique capacity for moral choices in a way that machines simply do not.

One of our findings is that for humanity to thrive in the new digital age, we cannot disconnect our technical decisions and innovations from moral reasoning. New technologies require innovations in society. To think that the advance of technology can be stopped, or that established moral modalities need not be applied afresh to new circumstances, is a fraught path. There will often be tradeoffs between social goals, such as maintaining privacy, and technological goals, such as identifying disease vectors.

The info is here.

Thursday, April 23, 2020

Universalization Reasoning Guides Moral Judgment

Levine, S., Kleiman-Weiner, M., and others
(2020, February 23).
https://doi.org/10.31234/osf.io/p7e6h

Abstract

To explain why an action is wrong, we sometimes say: “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and that children show a comparable pattern of judgment as early as 4 years old. We conclude that alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalization holds an important place in our moral minds.

From the Discussion:

Across five studies, we show that both adults and children sometimes make moral judgments well described by the logic of universalization, and not by standard outcome-, rule-, or norm-based models of moral judgment. We model participants’ judgment of the moral acceptability of an action as proportional to the change in expected utility in the hypothetical world where all interested parties feel free to do the action. This model accounts for the ways in which moral judgment is sensitive to the number of parties hypothetically interested in an action, the threshold at which harmful outcomes occur, and their interaction. By incorporating data on participants’ subjectively perceived utility functions we can predict their moral judgments of threshold problems with quantitative precision, further validating our proposed computational model.
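The model lends itself to a compact computational sketch. The Python snippet below is a minimal illustration of the universalization logic, not the authors' code; the threshold-style utility function and all parameter names are assumptions made for the example. An action is scored by the change in expected utility between the status quo and the hypothetical world in which every interested party acts.

def universalized_acceptability(num_interested, harm_threshold,
                                utility_ok=1.0, utility_harm=-1.0):
    # Toy utility (assumed): harm occurs only once the number of actors
    # reaches the threshold, e.g., a shared fishery collapses.
    universalized = utility_ok if num_interested < harm_threshold else utility_harm
    # Status quo: no one performs the action, so no harm occurs.
    baseline = utility_ok
    # Acceptability is proportional to the utility change under
    # universalization: 0 reads as permissible, negative as wrong.
    return universalized - baseline

# One interested actor stays below the threshold; ten do not.
print(universalized_acceptability(1, 5))   # 0.0  -> judged acceptable
print(universalized_acceptability(10, 5))  # -2.0 -> judged unacceptable

Even this toy version reproduces the qualitative signature reported above: the same harmless individual act flips from acceptable to unacceptable as the number of hypothetically interested parties crosses the harm threshold.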

The research is here.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.

Saturday, February 23, 2019

The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017

Naomi Ellemers, Jojanneke van der Toorn, Yavor Paunov, and Thed van Leeuwen
Personality and Social Psychology Review, 1–35

Abstract

We review empirical research on the (social) psychology of morality to identify which issues and relations are well documented by existing data and which areas of inquiry are in need of further empirical evidence. An electronic literature search yielded a total of 1,278 relevant research articles published from 1940 through 2017. These were subjected to expert content analysis and standardized bibliometric analysis to classify research questions and relate these to (trends in) empirical approaches that characterize research on morality. We categorize the research questions addressed in this literature into five different themes and consider how empirical approaches within each of these themes have addressed psychological antecedents and implications of moral behavior. We conclude that some key features of theoretical questions relating to human morality are not systematically captured in empirical research and are in need of further investigation.

Here is a portion of the article:

In sum, research on moral behavior demonstrates that people can be highly motivated to behave morally. Yet, personal convictions, social rules and normative pressures from others, or motivational lapses may all induce behavior that is not considered moral by others and invite self-justifying responses to maintain moral self-views.

The review article can be downloaded here.

Wednesday, October 3, 2018

Moral Reasoning

Richardson, Henry S.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Here are two brief excerpts:

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

(cut)

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B. If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
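The logic of that last step is easier to see in standard deontic notation (a reconstruction of the entry's argument, with $O$ for "ought" and $\Diamond$ for "can"):

\[
\begin{aligned}
&\text{Dilemma:}\quad O(A),\qquad O(B),\qquad \neg\Diamond(A \wedge B)\\
&\text{Agglomeration:}\quad O(A) \wedge O(B) \;\rightarrow\; O(A \wedge B)\\
&\text{``Ought'' implies ``can'':}\quad O(A \wedge B) \;\rightarrow\; \Diamond(A \wedge B)
\end{aligned}
\]

If both principles hold, conditions (1) and (2) entail $\Diamond(A \wedge B)$, contradicting (3); hence moral dilemmas are possible only if at least one of the two principles is false.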

The entry is here.

Saturday, January 27, 2018

Evolving Morality

Joshua Greene
Aspen Ideas Festival
2017

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

The one-hour talk on SoundCloud is here.

Wednesday, December 20, 2017

Can psychopathic offenders discern moral wrongs? A new look at the moral/conventional distinction.

Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A.
Journal of Abnormal Psychology, 121(2), 484–497 (2012)

Abstract

A prominent view of psychopathic moral reasoning suggests that psychopathic individuals cannot properly distinguish between moral wrongs and other types of wrongs. The present study evaluated this view by examining the extent to which 109 incarcerated offenders with varying degrees of psychopathy could distinguish between moral and conventional transgressions relative to each other and to nonincarcerated healthy controls. Using a modified version of the classic Moral/Conventional Transgressions task that uses a forced-choice format to minimize strategic responding, the present study found that total psychopathy score did not predict performance on the task. Task performance was explained by some individual subfacets of psychopathy and by other variables unrelated to psychopathy, such as IQ. The authors conclude that, contrary to earlier claims, insufficient data exist to infer that psychopathic individuals cannot know what is morally wrong.

The article is here.

Saturday, December 9, 2017

The Root of All Cruelty

Paul Bloom
The New Yorker
Originally published November 20, 2017

Here are two excerpts:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

(cut)

But “Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships” (Cambridge), by the anthropologist Alan Fiske and the psychologist Tage Rai, argues that these standard accounts often have it backward. In many instances, violence is neither a cold-blooded solution to a problem nor a failure of inhibition; most of all, it doesn’t entail a blindness to moral considerations. On the contrary, morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.

The article is here.

Saturday, October 28, 2017

Post-conventional moral reasoning is associated with increased ventral striatal activity at rest and during task

Zhuo Fang, Wi Hoon Jung, Marc Korczykowski, Lijuan Luo, and others
Scientific Reports 7, Article number: 7105 (2017)

Abstract

People vary considerably in moral reasoning. According to Kohlberg’s theory, individuals who reach the highest level of post-conventional moral reasoning judge moral issues based on deeper principles and shared ideals rather than self-interest or adherence to laws and rules. Recent research has suggested the involvement of the brain’s frontostriatal reward system in moral judgments and prosocial behaviors. However, it remains unknown whether moral reasoning level is associated with differences in reward system function. Here, we combined arterial spin labeling perfusion and blood oxygen level-dependent functional magnetic resonance imaging and measured frontostriatal reward system activity both at rest and during a sequential risky decision making task in a sample of 64 participants at different levels of moral reasoning. Compared to individuals at the pre-conventional and conventional level of moral reasoning, post-conventional individuals showed increased resting cerebral blood flow in the ventral striatum and ventromedial prefrontal cortex. Cerebral blood flow in these brain regions correlated with the degree of post-conventional thinking across groups. Post-conventional individuals also showed greater task-induced activation in the ventral striatum during risky decision making. These findings suggest that high-level post-conventional moral reasoning is associated with increased activity in the brain’s frontostriatal system, regardless of task-dependent or task-independent states.

The article is here.

Tuesday, October 24, 2017

‘The deserving’: Moral reasoning and ideological dilemmas in public responses to humanitarian communications

Irene Bruna Seu
British Journal of Social Psychology 55 (4), pp. 739-755.

Abstract

This paper investigates everyday moral reasoning in relation to donations and prosocial behaviour in a humanitarian context. The discursive analysis focuses on the principles of deservingness which members of the public use to decide who to help and under what conditions. The paper discusses three repertoires of deservingness: ‘Seeing a difference’, ‘Waiting in queues’ and ‘Something for nothing’ to illustrate participants' dilemmatic reasoning and to examine how the position of ‘being deserving’ is negotiated in humanitarian crises. Discursive analyses of these dilemmatic repertoires of deservingness identify the cultural and ideological resources behind these constructions and show how humanitarianism intersects and clashes with other ideologies and value systems. The data suggest that a neoliberal ideology, which endorses self-gratification and materialistic and individualistic ethics, and cultural assimilation of helper and receiver play important roles in decisions about humanitarian helping. The paper argues for the need for psychological research to engage more actively with the dilemmas involved in the moral reasoning related to humanitarianism and to contextualize decisions about giving and helping within the socio-cultural and ideological landscape in which the helper operates.

The research is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Wednesday, September 13, 2017

Economics: Society Cannot Function Without Moral Bonds

Geoffrey Hodgson
Evonomics
Originally posted June 29, 2016

Here is an excerpt:

When mainstream economists began to question that individuals are entirely self-interested, their approach was to retain utility-maximization and preference functions, but to make them “other-regarding” so that some notion of altruism could be maintained. But such an individual is still self-serving, rather than being genuinely altruistic in a wider and more adequate sense. While “other regarding” he or she is still egotistically maximizing his or her own utility. As Deirdre McCloskey put it, the economic agent is still Max U.
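In symbols (an illustrative reconstruction, not notation from the article), an "other-regarding" agent $i$ simply attaches a weight to another agent $j$'s payoff inside the one function it maximizes:

\[
U_i \;=\; u(x_i) \;+\; \alpha\, u(x_j), \qquad 0 \le \alpha \le 1
\]

Whatever the value of $\alpha$, the agent still acts by maximizing its own $U_i$, which is Hodgson's point: decorating the utility function with altruistic terms leaves the agent, in McCloskey's phrase, still "Max U".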

There is now an enormous body of empirical research confirming that humans have cooperative as well as self-interested dispositions. But many accounts conflate morality with altruism or cooperation. By contrast, Darwin established a distinctive and vital additional role for morality. Darwin’s argument counters the idea of unalloyed self-interest and the notion that morality can be reduced to a matter of utility or preference.

A widespread view among moral philosophers is that moral judgments cannot be treated as matters of mere preference or utility maximization. Morality means “doing the right thing.” It entails notions of justice that can over-ride our preferences or interests. Moral judgments are by their nature inescapable. They are buttressed by emotional feelings and reasoned argument. Morality differs fundamentally from matters of mere convenience, convention or conformism. Moral feelings are enhanced by learned cultural norms and rules. Morality is a group phenomenon involving deliberative, emotionally-driven and purportedly inescapable rules that apply to a community.

The article is here.

Thursday, May 18, 2017

Morality constrains the default representation of what is possible

Phillips, J., & Cushman, F.
Proc Natl Acad Sci U S A, 2017 (ISSN: 1091-6490)

The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.

The paper is here.

Wednesday, March 8, 2017

The Moral Insignificance of Self-consciousness

Joshua Shepherd
European Journal of Philosophy
First published February 2, 2017

Abstract

In this paper, I examine the claim that self-consciousness is highly morally significant, such that the fact that an entity is self-conscious generates strong moral reasons against harming or killing that entity. This claim is apparently very intuitive, but I argue it is false. I consider two ways to defend this claim: one indirect, the other direct. The best-known arguments relevant to self-consciousness's significance take the indirect route. I examine them and argue that (a) in various ways they depend on unwarranted assumptions about self-consciousness's functional significance, and (b) once these assumptions are undermined, motivation for these arguments dissipates. I then consider the direct route to self-consciousness's significance, which depends on claims that self-consciousness has intrinsic value or final value. I argue that what intrinsic or final value self-consciousness possesses is not enough to generate strong moral reasons against harming or killing.

The article is here.

Saturday, December 24, 2016

The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability

Sacco, D.F., Brown, M., Lustgraaf, C.J.N. et al.
Evolutionary Psychological Science (2016).
doi:10.1007/s40806-016-0080-6

Abstract

Although various motives underlie moral decision-making, recent research suggests that deontological moral decision-making may have evolved, in part, to communicate trustworthiness to conspecifics, thereby facilitating cooperative relations. Specifically, social actors whose decisions are guided by deontological (relative to utilitarian) moral reasoning are judged as more trustworthy, are preferred more as social partners, and are trusted more in economic games. The current study extends this research by using an alternative manipulation of moral decision-making as well as the inclusion of target facial identities to explore the potential role of participant and target sex in reactions to moral decisions. Participants viewed a series of male and female targets, half of whom were manipulated to either have responded to five moral dilemmas consistent with an underlying deontological motive or utilitarian motive; participants indicated their liking and trust toward each target. Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike for targets whose decisions were consistent with utilitarianism than men. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.

The research is here.

Editor's Note: This research may apply to psychotherapy, leadership style, and politics.