Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Reasoning.

Wednesday, October 3, 2018

Moral Reasoning

Richardson, Henry S.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Here are two brief excerpts:

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

(cut)

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B. If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
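To make the logic of that last step explicit, here is a minimal sketch in standard deontic notation, reading O as "ought" and \Diamond as "can"; the reductio framing is a gloss on the entry, not a quotation from it.

\[
\begin{array}{l}
\text{(1) } O(A) \qquad \text{(2) } O(B) \qquad \text{(3) } \neg\Diamond(A \wedge B) \\[4pt]
\text{Agglomeration: } O(A) \wedge O(B) \rightarrow O(A \wedge B) \\
\text{``Ought implies can'': } O(A \wedge B) \rightarrow \Diamond(A \wedge B) \\[4pt]
\text{From (1), (2), and both principles: } \Diamond(A \wedge B), \text{ which contradicts (3).}
\end{array}
\]

If both principles hold, conditions (1) to (3) cannot all be true and genuine dilemmas are ruled out; if either principle fails, the contradiction is blocked and dilemmas remain possible.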

The entry is here.

Monday, August 6, 2018

Why Should We Be Good?

Matt McManus
Quillette.com
Originally posted July 7, 2018

Here are two excerpts:

The negative motivation arises from moral dogmatism. There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose. For instance, this is often the case with religious fundamentalists who worry that secular society is increasingly unmoored from proper values and traditions. Ironically, the dark underside of this moral dogmatism is often a relativistic epistemology. Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

(cut)

These issues are all of considerable philosophical interest. In what follows, I want to press on just one issue that is often missed in debates between those who believe there are universal values, and those who believe that what is ethically correct is relative to either a culture or to the subjective preference of individuals. The issue I wish to explore is this: even if we know which values are universal, why should we feel compelled to adhere to them? Put more simply, even if we know what it is to be good, why should we bother to be good? This is one of the major questions addressed by what is often called meta-ethics.

The information is here.

Tuesday, June 26, 2018

Understanding unconscious bias

The Royal Society
Originally published November 17, 2015

This animation introduces the key concepts of unconscious bias.  It forms part of the Royal Society’s efforts to ensure that all those who serve on Royal Society selection and appointment panels are aware of differences in how candidates may present themselves, how to recognise bias in yourself and others, how to recognise inappropriate advocacy or unreasoned judgement. You can find out more about unconscious bias and download a briefing which includes current academic research at www.royalsociety.org/diversity.



A great three-minute video.

Sunday, June 24, 2018

Moral hindsight for good actions and the effects of imagined alternatives to reality

Ruth M.J. Byrne and Shane Timmons
Cognition
Volume 178, September 2018, Pages 82–91

Abstract

Five experiments identify an asymmetric moral hindsight effect for judgments about whether a morally good action should have been taken, e.g., Ann should run into traffic to save Jill who fell before an oncoming truck. Judgments are increased when the outcome is good (Jill sustained minor bruises), as Experiment 1 shows; but they are not decreased when the outcome is bad (Jill sustained life-threatening injuries), as Experiment 2 shows. The hindsight effect is modified by imagined alternatives to the outcome: judgments are amplified by a counterfactual that if the good action had not been taken, the outcome would have been worse, and diminished by a semi-factual that if the good action had not been taken, the outcome would have been the same. Hindsight modification occurs when the alternative is presented with the outcome, and also when participants have already committed to a judgment based on the outcome, as Experiments 3A and 3B show. The hindsight effect occurs not only for judgments in life-and-death situations but also in other domains such as sports, as Experiment 4 shows. The results are consistent with a causal-inference explanation of moral judgment and go against an aversive-emotion one.

Highlights
• Judgments a morally good action should be taken are increased when it succeeds.
• Judgments a morally good action should be taken are not decreased when it fails.
• Counterfactuals that the outcome would have been worse amplify judgments.
• Semi-factuals that the outcome would have been the same diminish judgments.
• The asymmetric moral hindsight effect supports a causal-inference theory.

The research is here.

Thursday, May 24, 2018

Is there a universal morality?

Massimo Pigliucci
The Evolution Institute
Originally posted March 2018

Here is the conclusion:

The first bit means that we are all deeply inter-dependent on other people. Despite the fashionable nonsense, especially in the United States, about “self-made men” (they are usually men), there actually is no such thing. Without social bonds and support our lives would be, as Thomas Hobbes famously put it, poor, nasty, brutish, and short. The second bit, the one about intelligence, does not mean that we always, or even often, act rationally. Only that we have the capability to do so. Ethics, then, especially (but not only) for the Stoics becomes a matter of “living according to nature,” meaning not to endorse whatever is natural (that’s an elementary logical fallacy), but rather to take seriously the two pillars of human nature: sociality and reason. As Marcus Aurelius put it, “Do what is necessary, and whatever the reason of a social animal naturally requires, and as it requires.” (Meditations, IV.24)

There is something, of course, the ancients did get wrong: they, especially Aristotle, thought that human nature was the result of a teleological process, that everything has a proper function, determined by the very nature of the cosmos. We don’t believe that anymore, not after Copernicus and especially Darwin. But we do know that human beings are indeed a particular product of complex and ongoing evolutionary processes. These processes do not determine a human essence, but they do shape a statistical cluster of characters that define what it means to be human. That cluster, in turn, constrains — without determining — what sort of behaviors are pro-social and lead to human flourishing, and what sort of behaviors don’t. And ethics is the empirically informed philosophical enterprise that attempts to understand and articulate that distinction.

The information is here.

Wednesday, May 16, 2018

Moral Fatigue: The Effects of Cognitive Fatigue on Moral Reasoning

S. Timmons and R. Byrne
Quarterly Journal of Experimental Psychology (March 2018)

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgments compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgment that focuses on the harmful action, killing one person, but not when they make a judgment that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgments about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The research is here.

Thursday, May 3, 2018

Why Pure Reason Won’t End American Tribalism

Robert Wright
www.wired.com
Originally published April 9, 2018

Here is an excerpt:

Pinker also understands that cognitive biases can be activated by tribalism. “We all identify with particular tribes or subcultures,” he notes—and we’re all drawn to opinions that are favored by the tribe.

So far so good: These insights would seem to prepare the ground for a trenchant analysis of what ails the world—certainly including what ails an America now famously beset by political polarization, by ideological warfare that seems less and less metaphorical.

But Pinker’s treatment of the psychology of tribalism falls short, and it does so in a surprising way. He pays almost no attention to one of the first things that springs to mind when you hear the word “tribalism.” Namely: People in opposing tribes don’t like each other. More than Pinker seems to realize, the fact of tribal antagonism challenges his sunny view of the future and calls into question his prescriptions for dispelling some of the clouds he does see on the horizon.

I’m not talking about the obvious downside of tribal antagonism—the way it leads nations to go to war or dissolve in civil strife, the way it fosters conflict along ethnic or religious lines. I do think this form of antagonism is a bigger problem for Pinker’s thesis than he realizes, but that’s a story for another day. For now the point is that tribal antagonism also poses a subtler challenge to his thesis. Namely, it shapes and drives some of the cognitive distortions that muddy our thinking about critical issues; it warps reason.

The article is here.

Wednesday, April 18, 2018

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Sunday, January 7, 2018

Are human rights anything more than legal conventions?

John Tasioulas
aeon.co
Originally published April 11, 2017

We live in an age of human rights. The language of human rights has become ubiquitous, a lingua franca used for expressing the most basic demands of justice. Some are old demands, such as the prohibition of torture and slavery. Others are newer, such as claims to internet access or same-sex marriage. But what are human rights, and where do they come from? This question is made urgent by a disquieting thought. Perhaps people with clashing values and convictions can so easily appeal to ‘human rights’ only because, ultimately, they don’t agree on what they are talking about? Maybe the apparently widespread consensus on the significance of human rights depends on the emptiness of that very notion? If this is true, then talk of human rights is rhetorical window-dressing, masking deeper ethical and political divisions.

Philosophers have debated the nature of human rights since at least the 12th century, often under the name of ‘natural rights’. These natural rights were supposed to be possessed by everyone and discoverable with the aid of our ordinary powers of reason (our ‘natural reason’), as opposed to rights established by law or disclosed through divine revelation. Wherever there are philosophers, however, there is disagreement. Belief in human rights left open how we go about making the case for them – are they, for example, protections of human needs generally or only of freedom of choice? There were also disagreements about the correct list of human rights – should it include socio-economic rights, like the rights to health or work, in addition to civil and political rights, such as the rights to a fair trial and political participation?

The article is here.

Tuesday, October 10, 2017

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Saturday, September 16, 2017

How to Distinguish Between Antifa, White Supremacists, and Black Lives Matter

Conor Friedersdorf
The Atlantic
Originally published August 31, 2017

Here are two excerpts:

One can condemn the means of extralegal violence, and observe that the alt-right, Antifa, and the far-left have all engaged in it on different occasions, without asserting that all extralegal violence is equivalent––murdering someone with a car or shooting a representative is more objectionable than punching with the intent to mildly injure. What’s more, different groups can choose equally objectionable means without becoming equivalent, because assessing any group requires analyzing their ends, not just their means.

For neo-Nazis and Klansmen in Charlottesville, one means, a torch-lit parade meant to intimidate by evoking bygone days of racial terrorism, was deeply objectionable; more importantly, their end, spreading white-supremacist ideology in service of a future where racists can lord power over Jews and people of color, is abhorrent.

Antifa is more complicated.

Some of its members employ the objectionable means of initiating extralegal street violence; but its stated end of resisting fascism is laudable, while its actual end is contested. Is it really just about resisting fascists or does it have a greater, less defensible agenda? Many debates about Antifa that play out on social media would prove less divisive if the parties understood themselves to be agreeing that opposing fascism is laudable while disagreeing about Antifa’s means, or whether its end is really that limited.

(cut)

A dearth of distinctions has a lot of complicated consequences, but in aggregate, it helps to empower the worst elements in a society, because those elements are unable to attract broad support except by muddying distinctions between themselves and others whose means or ends are defensible to a broader swath of the public. So come to whatever conclusions accord with your reason and conscience. But when expressing them, consider drawing as many distinctions as possible.

The article is here.

Wednesday, May 17, 2017

Moral conformity in online interactions

Meagan Kelly, Lawrence Ngo, Vladimir Chituc, Scott Huettel, and Walter Sinnott-Armstrong
Social Influence 

Abstract

Over the last decade, social media has increasingly been used as a platform for political and moral discourse. We investigate whether conformity, specifically concerning moral attitudes, occurs in these virtual environments apart from face-to-face interactions. Participants took an online survey and saw either statistical information about the frequency of certain responses, as one might see on social media (Study 1), or arguments that defend the responses in either a rational or emotional way (Study 2). Our results show that social information shaped moral judgments, even in an impersonal digital setting. Furthermore, rational arguments were more effective at eliciting conformity than emotional arguments. We discuss the implications of these results for theories of moral judgment that prioritize emotional responses.

The article is here.

Wednesday, April 26, 2017

Living a lie: We deceive ourselves to better deceive others

Matthew Hutson
Scientific American
Originally posted April 8, 2017

People mislead themselves all day long. We tell ourselves we’re smarter and better looking than our friends, that our political party can do no wrong, that we’re too busy to help a colleague. In 1976, in the foreword to Richard Dawkins’s “The Selfish Gene,” the biologist Robert Trivers floated a novel explanation for such self-serving biases: We dupe ourselves in order to deceive others, creating social advantage. Now after four decades Trivers and his colleagues have published the first research supporting his idea.

Psychologists have identified several ways of fooling ourselves: biased information-gathering, biased reasoning and biased recollections. The new work, forthcoming in the Journal of Economic Psychology, focuses on the first — the way we seek information that supports what we want to believe and avoid that which does not.

The article is here.

Tuesday, April 4, 2017

Illusions in Reasoning

Sangeet S. Khemlani & P. N. Johnson-Laird
Minds & Machines
DOI 10.1007/s11023-017-9421-x

Abstract

Some philosophers argue that the principles of human reasoning are sound and that mistakes are no more than momentary lapses in "information processing." This article makes a case to the contrary. It shows that human reasoners commit systematic fallacies. The theory of mental models predicts these errors. It postulates that individuals construct mental models of the possibilities to which the premises of an inference refer. But their models usually represent what is true in a possibility, not what is false. This procedure reduces the load on working memory, and for the most part it yields valid inferences. However, as a computer program implementing the theory revealed, it leads to fallacious conclusions for certain inferences: those for which it is crucial to represent what is false in a possibility. Experiments demonstrate the variety of these fallacies and contrast them with control problems, which reasoners tend to get right. The fallacies can be compelling illusions, and they occur in reasoning based on sentential connectives such as "if" and "or", quantifiers such as "all the artists" and "some of the artists", deontic relations such as "permitted" and "obligated", and causal relations such as "causes" and "allows". After we have reviewed the principal results, we consider the potential for alternative accounts to explain these illusory inferences, and we show how the illusions illuminate the nature of human rationality.
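As a concrete illustration of the kind of illusory inference the mental-model theory predicts, the short Python sketch below checks one classic problem by exhaustive enumeration. The problem is adapted from the mental-models literature, not quoted from this article, and the code is only a toy check, not the authors' program.

    # Problem (adapted from the mental-models literature): exactly one of
    # the following premises is true of a particular hand of cards.
    #   Premise 1: there is a king in the hand, or an ace, or both.
    #   Premise 2: there is a queen in the hand, or an ace, or both.
    # Question: is it possible that there is an ace in the hand?
    #
    # Mental models that represent only the "true" possibilities suggest
    # "yes", the predicted illusion. Enumerating every case shows the answer
    # is "no": an ace would make both premises true, violating "exactly one
    # premise is true".

    from itertools import product

    consistent_hands = []
    for king, queen, ace in product([False, True], repeat=3):
        premise1 = king or ace
        premise2 = queen or ace
        if premise1 + premise2 == 1:      # exactly one premise is true
            consistent_hands.append((king, queen, ace))

    ace_possible = any(ace for _, _, ace in consistent_hands)
    print(consistent_hands)   # only hands with a lone king or a lone queen
    print(ace_possible)       # False: no consistent case contains an ace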

Find it here.

Wednesday, February 22, 2017

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Ståhl T, Zaal MP, Skitka LJ (2016)
PLoS ONE 11(11): e0166332. doi:10.1371/journal.pone.0166332

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.

The article is here.

Friday, October 28, 2016

How Large Is the Role of Emotion in Judgments of Moral Dilemmas?

Zachary Horne and Derek Powell
PLoS ONE
Originally published: July 6, 2016

Abstract

Moral dilemmas often pose dramatic and gut-wrenching emotional choices. It is now widely accepted that emotions are not simply experienced alongside people’s judgments about moral dilemmas, but that our affective processes play a central role in determining those judgments. However, much of the evidence purporting to demonstrate the connection between people’s emotional responses and their judgments about moral dilemmas has recently been called into question. In the present studies, we reexamined the role of emotion in people’s judgments about moral dilemmas using a validated self-report measure of emotion. We measured participants’ specific emotional responses to moral dilemmas and, although we found that moral dilemmas evoked strong emotional responses, we found that these responses were only weakly correlated with participants’ moral judgments. We argue that the purportedly strong connection between emotion and judgments of moral dilemmas may have been overestimated.

The article is here.

Saturday, August 20, 2016

The Selective Laziness of Reasoning

Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
Cognitive Science
First published: 9 October 2015

Abstract

Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this “selective laziness,” we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.

The article is here.

Saturday, January 9, 2016

Moral judgment as information processing: an integrative review

Steve Guglielmo
Front Psychol. 2015; 6: 1637.
Published online 2015 Oct 30. doi:  10.3389/fpsyg.2015.01637

Abstract

How do humans make moral judgments about others’ behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment.

The entire article is here.