Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, June 16, 2015

Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas

I. Patil, C. Cogoni, N. Zangrando, L. Chittaro, and G. Silani
Social Neuroscience, 2014
Vol. 9, No. 1, 94-107

Abstract

Although research in moral psychology in the last decade has relied heavily on hypothetical moral dilemmas and has been effective in understanding moral judgment, how these judgments translate into behaviors remains a largely unexplored issue due to the harmful nature of the acts involved. To study this link, we follow a new approach based on a desktop virtual reality environment. In our within-subjects experiment, participants exhibited an order-dependent judgment-behavior discrepancy across temporally separated sessions, with many of them behaving in a utilitarian manner in virtual reality dilemmas despite their nonutilitarian judgments for the same dilemmas in textual descriptions. This change in decisions was reflected in the autonomic arousal of participants, with dilemmas in virtual reality perceived as more emotionally arousing than those in text, after controlling for general differences between the two presentation modalities (virtual reality vs. text). This suggests that moral decision-making in hypothetical moral dilemmas is susceptible to the contextual saliency of the presentation of these dilemmas.

The entire article is here.

The Coming Merge of Human and Machine Intelligence

By Jeff Stibel
Tufts Now
Originally published May 22, 2015

Here is an excerpt:

The reason that our brains are shrinking is simple: our biology is focused on survival, not intelligence. Larger brains were necessary to allow us to learn to use language, tools and all of the innovations that allowed our species to thrive. But now that we have become civilized—domesticated, if you will—certain aspects of intelligence are less necessary.

This is actually true of all animals: domesticated animals, including dogs, cats, hamsters and birds, have 10 to 15 percent smaller brains than their counterparts in the wild. Because brains are so expensive to maintain, large brain sizes are selected out when nature sees no direct survival benefit. It is an inevitable fact of life.

Fortunately, another influence has evolved over the past 20,000 years that is making us smarter even as our brains are shrinking: technology. Technology has allowed us to leapfrog evolution, enabling our brains and bodies to do things that were otherwise impossible biologically. We weren’t born with wings, but we’ve created airplanes, helicopters, hot air balloons and hang gliders. We don’t have sufficient natural strength or speed to bring down big game, but we’ve created spears, rifles and livestock farms.

The entire article is here.

Monday, June 15, 2015

The increasing lifestyle use of modafinil by healthy people: safety and ethical issues

By Sebastian Porsdam-Mann & Barbara J Sahakian
Current Opinion in Behavioral Sciences
Volume 4, August 2015, Pages 136–141

Pharmacological cognitive enhancers (PCEs) are used in the treatment of a variety of disorders, targeting symptoms including cognitive impairment and sleep abnormalities. Evidence suggests that PCEs also enhance cognition in healthy individuals. PCEs have attracted considerable interest recently, particularly from students, academics and the military. Proponents of PCE use in healthy people argue that these substances may be used to reduce fatigue-related and work-related accidents and improve learning outcomes.

In this article, safety concerns as well as ethical issues of fairness and coercion are considered. Discussion amongst experts in the field, government officials and members of society on the topic of the increasing lifestyle use of PCEs in healthy people is urgently needed.

The entire article is here.

Understanding ordinary unethical behavior: why people who value morality act immorally

by Francesca Gino
Current Opinion in Behavioral Sciences
Volume 3, June 2015, Pages 107–111

Cheating, deception, organizational misconduct, and many other forms of unethical behavior are among the greatest challenges in today's society. As regularly highlighted by the media, extreme cases and costly scams (e.g., Enron, Bernard Madoff) are common. Yet, even more frequent and pervasive are cases of ‘ordinary’ unethical behavior — unethical actions committed by people who care about morality but behave unethically when faced with an opportunity to cheat. A growing body of research in behavioral ethics and moral psychology shows that even good people (i.e., people who care about being moral) can and often do bad things. Examples include cheating on taxes, deceiving in interpersonal relationships, overstating performance and contributions to teamwork, inflating business expense reports, and lying in negotiations.

When considered cumulatively, ordinary unethical behavior causes considerable societal damage. For instance, employee theft causes U.S. companies to lose approximately $52 billion per year [4]. This empirical evidence is striking in light of social–psychological research that, for decades, has robustly shown that people typically value honesty, believe strongly in their own morality, and strive to maintain a positive self-image as moral individuals.

The entire article is here.

Sunday, June 14, 2015

The Evolutionary Roots of Morality and Professional Ethics

By John Gavazzi
Originally published in The Pennsylvania Psychologist

Every aspect of human existence stems from biological and cultural evolution. Even though evolutionary psychology is not a priority for clinical psychologists, the goal of this article is to highlight the evolutionary roots of human morals and professional ethics. At the broadest level, morality is defined as the ability to differentiate between right and wrong, or good and bad. Most research in moral psychology indicates that many moral decisions are based on emotional responses and cognitive intuitions of right and wrong. Moral judgments are typically affective, rapid, instinctive, and unconscious. These speedy cognitive processes and emotional responses are shortcuts intended to respond to environmental demands quickly and effectively. Most individuals do not take long to determine whether abortion or same-sex marriage is right or wrong. How are our morals a function of evolution?

Primatologist Frans de Waal (2013) attempted to answer this question in his book, The Bonobo and the Atheist. The book is based on his work studying primates as well as other animals, such as elephants. According to de Waal, morality originated within animal relationships first, prior to Homo sapiens culture. He used observations to determine whether there are similarities between primates and humans in terms of morality. Both are social creatures who depend on relationships to function effectively in the world. In order for primates to cooperate, form relationships, and work as groups, reciprocity and empathy are the two essential “pillars of morality” described by de Waal. Reciprocity encompasses the bidirectional nature of relationships, including concepts such as give and take, returning favors, and playing fairly. Empathy, defined as the ability to understand and share the feelings of others, can occur at both the cognitive and affective levels. In terms of cognitive empathy, a person or a primate needs the mental capacity to understand another group member’s perspective. People and primates also need to gauge or feel the emotions of others. As an example of empathy, humans and primates can both see emotional pain in others, demonstrate distress at what they are witnessing, and seek to console the sufferer.

The entire article is here.

Saturday, June 13, 2015

Biological Biases Can Be Detrimental to Effective Treatment

By John Gavazzi
Originally published in The Pennsylvania Psychologist

During workshops on ethical decision-making, I typically take time to highlight cognitive and emotional factors that adversely affect clinical judgment and impede high-quality psychotherapy. In terms of cognitive heuristics that hamper effective treatment, the list includes the Fundamental Attribution Error, Trait Negativity Bias, the Availability Heuristic, and the Dunning-Kruger Effect. Emotionally, a psychologist’s fear, anxiety, or disgust (also known as countertransference) can obstruct competent clinical judgment. A PowerPoint presentation providing more detail on these topics is on my SlideShare account, found here.

Research from cognitive science and moral psychology demonstrates that many of these heuristics and emotional reactions are automatic, intuitive, and unconscious. These cognitive heuristics and emotional responses are shortcuts intended to evaluate and respond to environmental demands quickly and efficiently, which is not always conducive to optimal clinical judgment and ethical decision-making. For better or worse, these cognitive and affective strategies are part of what makes us human. It is incumbent upon psychologists to be aware of these limitations and work hard to remediate them in our professional roles.

Recent research by Lebowitz and Ahn (2014) provides insight into another cognitive bias that leads to potentially detrimental emotional responses. Their research illustrates how a clinician’s beliefs about the causes of mental health problems can adversely influence his or her perceptions of patients. The authors investigated clinicians’ perceptions of patients when using a biological model of mental disorders. The biological model holds that genetics play an important role in the creation of mental distress; that central nervous system dysfunction is the most important component of a mental health disorder; and that, because of these biological origins, a patient’s thoughts and behaviors are largely outside of the patient’s control.

The entire article is here.

Friday, June 12, 2015

Confirmation Bias and the Limits of Human Knowledge

By Peter Wehner
Commentary Magazine
Originally published May 27, 2015

Here is an excerpt:

Confirmation bias is something we can easily identify in others but find very difficult to detect in ourselves. (If you finish this piece thinking only of the blindness of those who disagree with you, you are proving my point.) And while some people are far more prone to it than others, it’s something none of us is fully free of. We all hold certain philosophical assumptions, whether we’re fully aware of them or not, and they create a prism through which we interpret events. Often those assumptions are not arrived at through empiricism; they are grounded in moral intuitions. And moral intuitions, while not sub-rational, are shaped by things other than facts and figures. “The heart has its reasons which reason itself does not know,” Pascal wrote. And often the heart is right.

Without such core intuitions, we could not hope to make sense of the world. But these intuitions do not stay broad and implicit: we use them to make concrete judgments in life. The consequences of those judgments offer real-world tests of our assumptions, and if we refuse to learn from the results then we have no hope of improving our judgments in the future.

The entire article is here.

Anticipating and Resisting the Temptation to Behave Unethically

Oliver J. Sheldon and Ayelet Fishbach
Published online before print May 22, 2015
doi: 10.1177/0146167215586196

Abstract

Ethical dilemmas pose a self-control conflict between pursuing immediate benefits through behaving dishonestly and pursuing long-term benefits through acts of honesty. Therefore, factors that facilitate self-control for other types of goals (e.g., health and financial) should also promote ethical behavior. Across four studies, we find support for this possibility. Specifically, we find that only under conditions that facilitate conflict identification—including the consideration of several decisions simultaneously (i.e., a broad decision frame) and perceived high connectedness to the future self—does anticipating a temptation to behave dishonestly in advance promote honesty. We demonstrate these interaction patterns between conflict identification and temptation anticipation in negotiation situations (Study 1), lab tasks (Study 2), and ethical dilemmas in the workplace (Studies 3-4). We conclude that identifying a self-control conflict and anticipating a temptation are two necessary preconditions for ethical decision making.

The entire article is here.

Thursday, June 11, 2015

Goal-directed, habitual and Pavlovian prosocial behavior

Filip Gęsiarz and Molly J. Crockett
Front. Behav. Neurosci., 27 May 2015

Discussion

In this review we summarized evidence showing how the RLDM framework can integrate diverse findings describing what motivates prosocial behaviors. We suggested that the goal-directed system, given sufficient time and cognitive resources, weighs the costs of prosocial behaviors against their benefits, and chooses the action that best serves one’s goals, whether they be to merely maintain a good reputation or to genuinely enhance the welfare of another. We also suggested that to appreciate some of the benefits of other-regarding acts, such as the possibility of reciprocity, agents must have a well-developed theory of mind and an ability to foresee the cumulative value of future actions—both of which seem to involve model-based computations.

Furthermore, we reviewed findings demonstrating that the habitual system encodes the consequences of social interactions in the form of prediction errors and uses these signals to update the expected value of actions. Repetition of prosocial acts, resulting in positive outcomes, gradually increases their expected value and can lead to the formation of prosocial habits, which are performed without regard to their consequences. We speculated that the expected value of actions on a subjective level might be experienced as a ‘warm glow’ (Andreoni, 1990), linking our proposition to the behavioral economics literature. We also suggested that the notion of prosocial habits shares many features of the social heuristics hypothesis (Rand et al., 2014), implying that the habitual system could be a possible neurocognitive mechanism explaining the expression of social heuristics.
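The prediction-error updating attributed to the habitual system above corresponds to the standard delta rule in reinforcement learning: the expected value of an action moves a small step toward each observed outcome. A minimal sketch in Python (illustrative only; the learning rate and the coding of a positive outcome as 1.0 are assumptions, not values from the paper):

```python
def update_value(expected, outcome, learning_rate=0.1):
    """Delta-rule update: shift the expected value toward the observed outcome
    by a fraction (the learning rate) of the prediction error."""
    prediction_error = outcome - expected
    return expected + learning_rate * prediction_error

# Repeatedly performing a prosocial act that yields a positive outcome (+1.0)
# gradually raises the act's expected value -- the proposed substrate of
# habit formation, where the action is eventually emitted without
# deliberation about its consequences.
value = 0.0
for _ in range(50):
    value = update_value(value, outcome=1.0)
```

After many rewarded repetitions the expected value approaches the outcome itself, so the prediction error shrinks toward zero and the action becomes insensitive to further feedback, which is one way to read the review's account of prosocial habits.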

Finally, we have posited that the Pavlovian system, in response to another’s distress cues, evokes an automatic approach response towards stimuli enhancing another’s well-being—even if that response brings negative consequences.

The entire article is here.