Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Dilemmas.

Wednesday, December 20, 2017

Americans have always been divided over morality, politics and religion

Andrew Fiala
The Fresno Bee
Originally published December 1, 2017

Our country seems more divided than ever. Recent polls from the Pew Center and the Washington Post make this clear. The Post concludes that seven in 10 Americans say we have “reached a dangerous low point” of divisiveness. A significant majority of Americans think our divisions are as bad as they were during the Vietnam War.

But let’s be honest, we have always been divided. Free people always disagree about morality, politics and religion. We disagree about abortion, euthanasia, gay marriage, drug legalization, pornography, the death penalty and a host of other issues. We also disagree about taxation, inequality, government regulation, race, poverty, immigration, national security, environmental protection, gun control and so on.

Beneath our moral and political disagreements are deep religious differences. Atheists want religious superstitions to die out. Theists think we need God’s guidance. And religious people disagree among themselves about God, morality and politics.

The post is here.

Friday, November 17, 2017

Going with your gut may mean harsher moral judgments

Jeff Sossamon
www.futurity.org
Originally posted November 2, 2017

Going with your intuition could make you judge others’ moral transgressions more harshly and keep you from changing your mind, even after considering all the facts, a new study suggests.

The findings show that people who strongly rely on intuition automatically condemn actions they perceive to be morally wrong, even if there is no actual harm.

In psychology, intuition, or “gut instinct,” is defined as the ability to understand something immediately, without the need for reasoning.

“It is now widely acknowledged that intuitive processing influences moral judgment,” says Sarah Ward, a doctoral candidate in social and personality psychology at the University of Missouri.

“We thought people who were more likely to trust their intuition would be more likely to condemn things that are shocking, whereas people who don’t rely on gut feelings would not condemn these same actions as strongly,” Ward says.

Ward and Laura King, professor of psychological sciences, had study participants read a series of scenarios and judge whether each action was wrong, such as an individual giving a partner a gift that had originally been purchased for an ex.

The article is here.

Friday, November 3, 2017

A fundamental problem with Moral Enhancement

Joao Fabiano
Practical Ethics
Originally posted October 13, 2017

Moral philosophers often prefer to conceive thought experiments, dilemmas and problem cases involving single individuals who make one-shot decisions with well-defined short-term consequences. Morality is complex enough that such simplifications seem justifiable or even necessary for philosophical reflection. If, even when considering simplified toy scenarios, we are still far from consensus on which is the best moral theory or what makes actions right or wrong – or even on whether such questions should be the central problem of moral philosophy – then introducing group or long-term effects would make matters significantly worse. However, when it comes to actually changing human moral dispositions with the use of technology (i.e., moral enhancement), ignoring the essential fact that morality deals with group behaviour with long-ranging consequences can be extremely risky. Despite those risks, attempting to provide a full account of morality before conducting moral enhancement would be both impractical and arguably risky in its own right. We seem to be far from such an account, yet there are pressing current moral failings, such as the inability to achieve proper large-scale cooperation, which makes solutions to present global catastrophic risks, such as global warming or nuclear war, next to impossible. Sitting back and waiting for a complete theory of morality might be riskier than attempting to fix our moral failings using incomplete theories. We must, nevertheless, proceed with caution and an awareness of such incompleteness.

Here I will present several severe risks of moral enhancement that arise from focusing on improving individual dispositions while ignoring emergent societal effects, and I will point to tentative solutions to those risks. I deem these emergent risks fundamental problems both because they lie at the foundation of the theoretical framework guiding moral enhancement – moral philosophy – and because they seem, at present, inescapable; my proposed solution will aim at increasing awareness of such problems rather than directly solving them.

The article is here.

Wednesday, September 20, 2017

Companies should treat cybersecurity as a matter of ethics

Thomas Lee
The San Francisco Chronicle
Originally posted September 2, 2017

Here is an excerpt:

An ethical code will force companies to rethink how they approach research and development. Instead of making stuff first and then worrying about data security later, companies will start from the premise that they need to protect consumer privacy before they start designing new products and services, Harkins said.

There is precedent for this. Many professional organizations like the American Medical Association and American Bar Association require members to follow a code of ethics. For example, doctors must pledge above all else not to harm a patient.

A code of ethics for cybersecurity will no doubt slow the pace of innovation, said Maurice Schweitzer, a professor of operations, information and decisions at the University of Pennsylvania’s Wharton School.

Ultimately, though, following such a code could boost companies’ reputations, Schweitzer said. Given the increasing number and severity of hacks, consumers will pay a premium for companies dedicated to security and privacy from the get-go, he said.

In any case, what’s wrong with taking a pause so we can catch our breath? The ethical quandaries technology poses to mankind are only going to get more complex as we increasingly outsource our lives to thinking machines.

That’s why a code of ethics is so important. Technology may come and go, but right and wrong never changes.

The article is here.

Tuesday, September 5, 2017

Ethical behaviour of physicians and psychologists: similarities and differences

Ferencz Kaddari M, Koslowsky M, Weingarten MA
Journal of Medical Ethics. Published Online First: 18 August 2017.

Abstract

Objective 

To compare the coping patterns of physicians and clinical psychologists when confronted with clinical ethical dilemmas and to explore consistency across different dilemmas.

Population

88 clinical psychologists and 149 family physicians in Israel.

Method 

Six dilemmas representing different ethical domains were selected from the literature. Vignettes were composed for each dilemma, and seven possible behavioural responses for each were proposed, scaled from most to least ethical. The vignettes were presented to both family physicians and clinical psychologists.

Results 

Psychologists’ aggregated mean ethical intention score was significantly higher than that of the physicians (F(6, 232)=22.44, p<0.001, η²=0.37). Psychologists showed higher ethical intent for two dilemmas: issues of payment (they would continue treating a non-paying patient while physicians would not) and dual relationships (they would avoid treating the son of a colleague). In the other four vignettes, psychologists and physicians responded in much the same way. The highest ethical intent scores for both psychologists and physicians were for confidentiality and for a colleague's inappropriate practice due to personal problems.

Conclusions 

Responses to the dilemmas by physicians and psychologists can be categorised into two groups: (1) similar behaviours on the part of both professions when confronting dilemmas concerning confidentiality, inappropriate practice due to personal problems, improper professional conduct and academic issues and (2) different behaviours when confronting either payment issues or dual relationships.

The research is here.
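
A quick sanity check on the reported effect size: partial eta squared can be recovered from an F statistic and its degrees of freedom via the standard identity η²p = (F · df1) / (F · df1 + df2). A minimal sketch in Python (the function name is mine; the abstract does not show the authors' own computation):

```python
# Partial eta squared from an F statistic:
#   eta_p^2 = (F * df_effect) / (F * df_effect + df_error)

def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from F(df_effect, df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Values reported in the abstract: F(6, 232) = 22.44
print(round(partial_eta_squared(22.44, 6, 232), 2))  # 0.37, matching the reported effect size
```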

Wednesday, August 16, 2017

Learning morality through gaming

Jordan Erica Webber
The Guardian
Originally published 13 August 2017

Here is an excerpt:

Whether or not you agree with Snowden’s actions, the idea that playing video games could affect a person’s ethical position or even encourage any kind of philosophical thought is probably surprising. Yet we’re used to the notion that a person’s thinking could be influenced by the characters and conundrums in books, film and television; why not games? In fact, games have one big advantage that makes them especially useful for exploring philosophical ideas: they’re interactive.

As any student of philosophy will tell you, one of the primary ways of engaging with abstract questions is through thought experiments. Is Schrödinger’s cat dead or alive? Would you kill one person to save five? A thought experiment presents an imagined scenario (often because it wouldn’t be viable to perform the experiment in real life) to test intuitions about the consequences.

Video games, too, are made up of counterfactual narratives that test the player: here is a scenario, what would you do? Unlike books, film and television, games allow you to act on your intuition. Can you kill a character you’ve grown to know over hours of play, if it would save others?

The article is here.

Wednesday, May 3, 2017

Teaching Ethics Should Be a STEM Essential

Anne Jolly
MiddleWeb
Originally posted October 11, 2015

Here is an excerpt:

Do you have ethics built into your STEM curriculum? What does that look like? For a start I’m envisioning kids in their teams debating solutions to problems, looking at possible consequences of those solutions, and examining the trade-offs they’d have to make.

Some types of real-world problems lend themselves readily to ethical deliberations. Proposed environmental solutions for cleaner air, for example, resulted in push-back from some industries that faced investing more money in equipment, and even from some citizens who feared a rise in price for the products these industries produce. So how do you lead your students through a productive discussion of these issues?

In my search for answers to that question I located a free Ethics Primer from the Northwest Association for Biomedical Research (downloadable as a PDF). This publication strongly recommends that the study of ethics begin through exploring a case study or a scenario.

A STEM lesson provides a perfect kickoff for an ethics discussion, since a scenario generally accompanies the real-world problem kids are trying to solve. From there, ethics principles and practices can be built naturally into the lesson.

The article is here.

Saturday, December 24, 2016

The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability

Sacco, D.F., Brown, M., Lustgraaf, C.J.N. et al.
Evolutionary Psychological Science (2016).
doi:10.1007/s40806-016-0080-6

Abstract

Although various motives underlie moral decision-making, recent research suggests that deontological moral decision-making may have evolved, in part, to communicate trustworthiness to conspecifics, thereby facilitating cooperative relations. Specifically, social actors whose decisions are guided by deontological (relative to utilitarian) moral reasoning are judged as more trustworthy, are preferred more as social partners, and are trusted more in economic games. The current study extends this research by using an alternative manipulation of moral decision-making as well as the inclusion of target facial identities to explore the potential role of participant and target sex in reactions to moral decisions. Participants viewed a series of male and female targets, half of whom were presented as having responded to five moral dilemmas in a manner consistent with an underlying deontological motive and half with an underlying utilitarian motive; participants indicated their liking and trust toward each target. Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike than men did for targets whose decisions were consistent with utilitarianism. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living, and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.

The research is here.

Editor's Note: This research may apply to psychotherapy, leadership style, and politics.

Monday, November 28, 2016

Studying ethics, 'Star Trek' style, at Drake

Daniel P. Finney
The Des Moines Register
Originally posted November 10, 2016

Here is an excerpt:

Sure, the discussion was about ethics of the fictional universe of “Star Trek.” But fiction, like all art, reflects the human condition.

The issue Capt. Sisko wrestled with had parallels to the real world.

Some historians controversially assert that President Franklin D. Roosevelt knew of the impending attack on Pearl Harbor in 1941 but allowed it to happen to bring the United States into World War II, a move the public had opposed before the attack.

In more recent times, former President George W. Bush’s administration used faulty intelligence suggesting Iraq possessed weapons of mass destruction to justify a war that many believed would stabilize the increasingly sectarian Middle East. It did not.

The article is here.

Tuesday, November 15, 2016

Scientists “Switch Off” Self-Control Using Brain Stimulation

By Catherine Caruso
Scientific American
Originally published on October 19, 2016

Imagine you are faced with the classic thought experiment dilemma: You can take a pile of money now or wait and get an even bigger stash of cash later on. Which option do you choose? Your level of self-control, researchers have found, may have to do with a region of the brain that lets us take the perspective of others—including that of our future self.

A study, published today in Science Advances, found that when scientists used noninvasive brain stimulation to disrupt a brain region called the temporoparietal junction (TPJ), people appeared less able to see things from the point of view of their future selves or of another person, and consequently were less likely to share money with others and more inclined to opt for immediate cash instead of waiting for a larger bounty at a later date.

The TPJ, which is located where the temporal and parietal lobes meet, plays an important role in social functioning, particularly in our ability to understand situations from the perspectives of other people. However, according to Alexander Soutschek, an economist at the University of Zurich and lead author on the study, previous research on self-control and delayed gratification has focused instead on the prefrontal brain regions involved in impulse control.

The article is here.
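
The "cash now versus more cash later" task described above is a classic delay-discounting measure, in which steeper discounting of the delayed reward reads as weaker self-control. As a hedged illustration of how such choices are typically quantified, here is a minimal sketch of the standard hyperbolic discounting model, V = A / (1 + kD); the parameter values and names are illustrative and not taken from the study:

```python
# Hyperbolic discounting: V = A / (1 + k * D)
# A = reward amount, D = delay, k = discount rate (higher k = steeper discounting).
# Parameter values below are illustrative only, not taken from the study.

def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Subjective present value of a delayed reward under hyperbolic discounting."""
    return amount / (1 + k * delay_days)

immediate = 50.0                                          # take $50 now...
delayed = discounted_value(100.0, delay_days=30, k=0.05)  # ...or $100 in 30 days
print(delayed)  # 40.0: for this chooser the delayed $100 is worth less than $50 now
```

A chooser with a smaller k (say 0.01) would value the delayed $100 at about $77 and wait; in this framework, disrupting the TPJ would correspond to behaving as if k were larger.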

Tuesday, August 23, 2016

Patients on social media cause ethics headache for doctors

BY Lisa Rapaport
Reuters
Originally published August 5, 2016

As more and more sick patients go online and use social media to search for answers about their health, they are raising a lot of thorny ethical questions for doctors.

“The internet and ready access to vast amounts of information are now permanent aspects of how we live our lives, including how we think about and deal with our health problems,” Dr. Chris Feudtner, director of medical ethics at the Children's Hospital of Philadelphia, said by email.

Social media in particular can affect how patients interact with doctors and what type of care they expect, Feudtner and colleagues write in an article about ethics in the journal Pediatrics.

“Clinicians should ask about what patients and families have read on the Internet, and then work through that information thoughtfully, as sometimes Internet information is not helpful and sometimes it is helpful,” Feudtner said. “Doing this takes time and effort, yet trust is built with time and effort.”

The article is here.

Tuesday, January 26, 2016

A Therapist’s Fib

By Jonathan Schiff
The New York Times
Originally published January

An old New Yorker cartoon features a man suspended upside down from the ceiling, like a stalactite. A psychiatrist explains to the wife that the first objective is to convince the man that he is a stalagmite.

Funny — but it invites a serious question: Is it ever justified for a clinician to help a client to believe in a fiction?

The brief article is here.

Note: Is it ever ethical to lie to a patient?

Monday, December 7, 2015

Poker-faced morality: Concealing emotions leads to utilitarian decision making

Jooa Julia Lee, Francesca Gino
Organizational Behavior and Human Decision Processes, Volume 126, January 2015, Pages 49–64

Abstract

This paper examines how making deliberate efforts to regulate aversive affective responses influences people’s decisions in moral dilemmas. We hypothesize that emotion regulation—mainly suppression and reappraisal—will encourage utilitarian choices in emotionally charged contexts and that this effect will be mediated by the decision maker’s decreased deontological inclinations. In Study 1, we find that individuals who endorsed the utilitarian option (vs. the deontological option) were more likely to suppress their emotional expressions. In Studies 2a, 2b, and 3, we instruct participants either to regulate their emotions, using one of two different strategies (reappraisal vs. suppression), or not to regulate, and we collect data through the concurrent monitoring of psycho-physiological measures. We find that participants are more likely to make utilitarian decisions when asked to suppress their emotions than when they do not regulate their affect. In Study 4, we show that one’s reduced deontological inclinations mediate the relationship between emotion regulation and utilitarian decision making.

The article is here.
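
On the mediation claim in Study 4: an indirect effect is conventionally estimated as the product of the path from predictor to mediator (a) and the path from mediator to outcome controlling for the predictor (b). A minimal sketch on simulated data (NumPy only; the variable names and effect sizes are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data following the hypothesized chain: emotion regulation (x)
# lowers deontological inclination (m), which in turn lowers utilitarian choice (y).
x = rng.integers(0, 2, n).astype(float)      # 0 = no regulation, 1 = suppression
m = -0.5 * x + rng.normal(size=n)            # a-path: x -> m
y = 0.3 * x - 0.6 * m + rng.normal(size=n)   # b-path: m -> y, plus a direct effect

a = np.polyfit(x, m, 1)[0]                    # a-path estimate: slope of m on x
design = np.column_stack([np.ones(n), x, m])  # intercept, x, m
b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # b-path: slope of y on m, given x

print(f"indirect effect a*b = {a * b:.3f}")  # clearly nonzero, consistent with mediation
```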

Saturday, July 11, 2015

Does Brain Difference Affect Legal and Moral Responsibility?

HMS Center for Bioethics
Published on May 12, 2015

Brains create behavior. Yet we hold people, not brains, morally and legally responsible for their actions. Under what conditions could -- or should -- brain disorder affect the ways in which we assign moral and legal responsibility to a person?

In this conversation among a neuroscientist who studies moral judgment, a forensic psychiatrist, and a law professor, we explore three cases that highlight the relationship between brain disorder, law-breaking, and norms relating to responsibility.

Each case raises challenging questions: Can we establish whether the brain disorder caused the law-breaking behavior? Even if we can, is the presence of brain disorder morally or legally excusing? All behavior is caused: Why should some causes be excusing, but not others? If brain disorder can cause unlawful behavior, can we infer the reverse -- that people who behave unlawfully have disordered brains? Check out this provocative discussion on the state of the art at the intersection of neuroethics, brain science, philosophy, and the law.


Panel:

Dr. Fiery Cushman, Ph.D., is an assistant professor in the Department of Psychology at Harvard University. From 2011-2014 he served as a post-doctoral fellow in moral psychology, funded by the Mind, Brain and Behavior Initiative at Harvard University.

Dr. Judith Edersheim, MD, JD, is the Co-Founder and Co-Director of the Center for Law, Brain and Behavior, an Assistant Clinical Professor of Psychiatry at Harvard Medical School, and an attending Psychiatrist in the Department of Psychiatry at Massachusetts General Hospital.

Amanda Pustilnik, JD, is the Senior Fellow in Law & Applied Neuroscience at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, a faculty member of the Center for Law, Brain, and Behavior at Massachusetts General Hospital, and an assistant professor of law at the University of Maryland School of Law.