Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Conflict.

Wednesday, September 23, 2015

Microaggression and Changing Moral Cultures

By Bradley Campbell and Jason Manning
The Chronicle of Higher Education
Originally posted July 9, 2015

Here is an excerpt:

We can better understand complaints about microaggression and the reactions to them if we understand that each side of the debate draws from a different moral culture. Those calling attention to microaggressions have rejected the morality dominant among middle-class Americans during the 20th century — what sociologists and historians have sometimes called a dignity culture, which abhors private vengeance and encourages people to go to the police or use the courts when they are seriously harmed. Less serious offenses might be ignored, and certainly any merely verbal offense should be. Parents thus teach their children to say, "Sticks and stones may break my bones, but words can never hurt me."

Microaggression complaints make clear that this is no longer settled morality. Those who see microaggressions as a serious problem and who bring up minor and unintentional slights reject the idea that words can’t hurt, that slights should be brushed off, that even overt insults should be ignored. This attitude reveals the emergence of a new moral culture, one we call victimhood culture, since it valorizes victimhood.

Microaggression complaints are just one manifestation; from the same circles of campus activists also come calls for trigger warnings to alert sensitive students to course material that might disturb them, and the creation of "safe spaces" to shield students from offensive ideas.

The entire blog post is here.

Friday, July 31, 2015

Can Moral Disputes Be Resolved?

By Alex Rosenberg
The New York Times - Opinion
Originally published July 13, 2015

Here is an excerpt:

The notion that moral judgments are not just true or false claims about human conduct helps explain the failure of ethical theories as far back as Aristotle’s. These theories started out on the wrong foot, by treating morality and immorality as intrinsic to the actions themselves, instead of our responses to them.

Factoring human emotions into moral judgment explains much about them. Why they are held so strongly, why different cultures that shape human emotional responses have such different moral norms, even why people treat abstract ethical disagreement by others as a moral flaw. And most of all, this meta-ethical theory helps us understand why such disputes are sometimes intractable.

Meta-ethics has begun to make use of findings in cognitive social psychology, and in neuroscience, to help understand the nature of ethical claims.

The entire article is here.

Saturday, July 25, 2015

Economic Games and Social Neuroscience Methods Can Help Elucidate the Psychology of Parochial Altruism

Jim A. C. Everett, Nadira S. Faber, Molly J. Crockett, and Carsten K. W. De Dreu
Opinion Article
Front. Psychol. | doi: 10.3389/fpsyg.2015.00861

The success of Homo sapiens can in large part be attributed to their highly social nature, and particularly their ability to live and work together in extended social groups. Throughout history, humans have made sacrifices both to advance and to defend the interests of fellow group members against non-group members. Intrigued by this, researchers from multiple disciplines have attempted to explain the psychological origins and processes of parochial altruism: first, the well-documented tendency toward increased cooperation and prosocial behavior within the boundaries of a group (akin to 'ingroup love' and ingroup favoritism), and second, the propensity to reject, derogate, and even harm outgroup members (akin to 'outgroup hate'; e.g., Brewer, 1999; Choi & Bowles, 2007; De Dreu, Balliet, & Halevy, 2014; Hewstone, Rubin, & Willis, 2002; Rusch, 2014; Tajfel & Turner, 1979). Befitting its centrality to a wide range of human social endeavors, parochial altruism is manifested in a large variety of contexts that may differ psychologically. Sometimes group members help others achieve a positive outcome (e.g., gain money), and sometimes group members help others avoid a negative outcome (e.g., avoid being robbed). Sometimes group members conflict over a new resource (e.g., status, money, land) that is currently 'unclaimed', and sometimes they conflict over a resource that is already held by one group.

The entire article is here.

Monday, February 23, 2015

Parents who wish no further treatment for their child

By M.A. de Vos, A.A. Seeber, S.K.M. Gevers, A.P. Bos, F. Gevers, and D.L. Willems
J Med Ethics 2015;41:195-200 doi:10.1136/medethics-2013-101395

Abstract

Background

In the ethical and clinical literature, cases of parents who want treatment for their child to be withdrawn against the views of the medical team have not received much attention. Yet resolving such conflicts demands much effort from both the medical team and the parents.

Objective

To discuss who can best protect a child's interests, a question that often becomes a central issue and puts considerable pressure on mutual trust and partnership.

Methods

We describe the case of a 3-year-old boy with acquired brain damage due to autoimmune-mediated encephalitis whose parents wanted to stop treatment. By comparing this case with relevant literature, we systematically explored the pros and cons of sharing end-of-life decisions with parents in cases where treatment is considered futile by parents and not (yet) by physicians.

Conclusions

Sharing end-of-life decisions with parents is a more important duty for physicians than protecting parents from guilt or doubt. Moreover, a request from parents on behalf of their child to discontinue treatment is, and should be, hard to overrule in cases with significant prognostic uncertainty and/or in cases with divergent opinions within the medical team.

The entire article is here.

Saturday, December 6, 2014

Denying Problems When We Don’t Like the Solutions

By Duke University
Press Release
Originally published November 6, 2014

Here is an excerpt:

A new study from Duke University finds that people will evaluate scientific evidence based on whether they view its policy implications as politically desirable. If they don't, then they tend to deny the problem even exists.

“Logically, the proposed solution to a problem, such as an increase in government regulation or an extension of the free market, should not influence one’s belief in the problem. However, we find it does,” said co-author Troy Campbell, a Ph.D. candidate at Duke's Fuqua School of Business. “The cure can be more immediately threatening than the problem.”

The study, "Solution Aversion: On the Relation Between Ideology and Motivated Disbelief," appears in the November issue of the Journal of Personality and Social Psychology (viewable here).

The entire article is here.

Thursday, July 17, 2014

Moral Dilemmas

The Stanford Encyclopedia of Philosophy
Revised June 30, 2014

Here is an excerpt:

What is common to the two well-known cases is conflict. In each case, an agent regards herself as having moral reasons to do each of two actions, but doing both actions is not possible. Ethicists have called situations like these moral dilemmas. The crucial features of a moral dilemma are these: the agent is required to do each of two (or more) actions; the agent can do each of the actions; but the agent cannot do both (or all) of the actions. The agent thus seems condemned to moral failure; no matter what she does, she will do something wrong (or fail to do something that she ought to do).

The Platonic case strikes many as too easy to be characterized as a genuine moral dilemma. For the agent's solution in that case is clear; it is more important to protect people from harm than to return a borrowed weapon. And in any case, the borrowed item can be returned later, when the owner no longer poses a threat to others. Thus in this case we can say that the requirement to protect others from serious harm overrides the requirement to repay one's debts by returning a borrowed item when its owner so demands. When one of the conflicting requirements overrides the other, we do not have a genuine moral dilemma. So in addition to the features mentioned above, in order to have a genuine moral dilemma it must also be true that neither of the conflicting requirements is overridden (Sinnott-Armstrong 1988, Chapter 1).

The entire page is here.

Editor's note: Anyone interested in ethics and morality needs to read this page. It is an excellent source for understanding moral dilemmas, as well as the ethical dilemmas that arise when one is in the role of a psychologist.

Thursday, January 16, 2014

The Tragedy of Common-Sense Morality

Evolution didn’t equip us for modern judgments.

By Tiffany O'Callaghan
The New Scientist
Originally published December 14, 2013

Our instincts don't always serve us well. Moral psychologist Joshua Greene explains why, in the modern world, we need to figure out when to put our sense of right and wrong in manual mode. His new book is Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Tiffany O’Callaghan: You say morality is more than it evolved to be. What do you mean?

Joshua Greene: Morality is essentially a suite of psychological mechanisms that enable us to cooperate. But, biologically at least, we only evolved to cooperate in a tribal way. Individuals who were more moral—more cooperative with those around them—could outcompete others who were not. However, we have the capacity to take a step back from this and ask what a more global morality would look like. Why are the lives of people on the other side of the world worth any less than those in my immediate community? Going through that reasoning process can allow our moral thinking to do something it never evolved to do.

TO: So we need to be able to switch from intuitive morality to more considered responses? When should we use which system?

JG: When it’s a matter of me versus us, my interests versus those of others, our instincts do pretty well. They don't do as well when it’s us versus them, my group’s interests and values versus another group’s. Our moral intuitions didn’t evolve to solve that problem in an even-handed way. When groups disagree about the right thing to do, we need to slow down and shift into manual mode.

The entire article is here.

Monday, November 11, 2013

Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality

By Robert Wright
The Atlantic
November 2013

The article is really a review of several books. However, it is not a formal book review; rather, it compares and contrasts efforts by those studying morality, psychology, and biology. Here are some excerpts:

The well-documented human knack for bigotry, conflict, and atrocity must have something to do with the human mind, and relevant parts of the mind are indeed coming into focus—not just thanks to the revolution in brain scanning, or even advances in neuroscience more broadly, but also thanks to clever psychology experiments and a clearer understanding of the evolutionary forces that shaped human nature. Maybe we’re approaching a point where we can actually harness this knowledge, make radical progress in how we treat one another, and become a species worthy of the title Homo sapiens.

(cut)

...the impulses and inclinations that shape moral discourse are, by and large, legacies of natural selection, rooted in our genes. Specifically, many of them are with us today because they helped our ancestors realize the benefits of cooperation. As a result, people are pretty good at getting along with one another, and at supporting the basic ethical rules that keep societies humming.

(cut)

When you combine judgment that’s naturally biased with the belief that wrongdoers deserve to suffer, you wind up with situations like two people sharing the conviction that the other one deserves to suffer. Or two groups sharing that conviction. And the rest is history. Rwanda’s Hutus and Tutsis, thanks to their common humanity, shared the intuition that bad people should suffer; they just disagreed—thanks to their common humanity—about which group was bad.

The entire article is here.