Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, April 2, 2017

The Problem of Evil: Crash Course Philosophy #13

Published on May 9, 2016

After weeks of exploring the existence and nature of god, today Hank explores one of the biggest problems in theism, and possibly the biggest philosophical question humanity faces: why is there evil?


Saturday, April 1, 2017

Does everyone have a price? On the role of payoff magnitude for ethical decision making

Benjamin E. Hilbig and Isabel Thielmann
Cognition
Volume 163, June 2017, Pages 15–25

Abstract

Most approaches to dishonest behavior emphasize the importance of corresponding payoffs, typically implying that dishonesty might increase with increasing incentives. However, prior evidence does not appear to confirm this intuition. Yet extant findings are based on relatively small payoffs, the potential effects of which are solely analyzed across participants. In two experiments, we used different multi-trial die-rolling paradigms designed to investigate dishonesty at the individual level (i.e., within participants) and as a function of the payoffs at stake – implementing substantial incentives exceeding 100€. Results show that incentive sizes indeed matter for ethical decision making, though primarily for two subsets of “corruptible individuals” (who cheat more the more they are offered) and “small sinners” (who tend to cheat less as the potential payoffs increase). Others (“brazen liars”) are willing to cheat for practically any non-zero incentive, whereas still others (“honest individuals”) do not cheat at all, even for large payoffs. By implication, the influence of payoff magnitude on ethical decision making is often obscured when analyzed across participants and with insufficiently tempting payoffs.
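For readers who want a concrete feel for the within-participant logic the abstract describes, here is a minimal, hypothetical Python sketch of a multi-trial die-rolling task in which the payoff at stake varies from trial to trial, so a single participant's cheating rate can be compared across payoff levels. The payoff levels, the threshold rule, and all function names are illustrative assumptions, not the authors' paradigm or analysis code.

```python
# Hypothetical sketch (not the authors' code): a multi-trial die-rolling task
# in which each trial's payoff for reporting a "6" varies, so dishonesty can
# be examined within a single participant.
import random

def simulate_participant(n_trials=100, cheat_threshold=5.0, rng=None):
    """Simulate one participant whose willingness to misreport depends on
    the payoff at stake (a toy 'corruptible individual' profile)."""
    rng = rng or random.Random()
    records = []
    for _ in range(n_trials):
        payoff = rng.choice([0.1, 1, 5, 10, 50, 100])  # euros at stake (assumed levels)
        actual_roll = rng.randint(1, 6)
        # Misreport the payoff-maximising outcome only if the payoff exceeds
        # this participant's personal threshold.
        reported = 6 if (actual_roll != 6 and payoff >= cheat_threshold) else actual_roll
        records.append((payoff, actual_roll, reported))
    return records

def cheating_rate_by_payoff(records):
    """Share of trials with an inflated report, per payoff level."""
    by_payoff = {}
    for payoff, actual, reported in records:
        hits, total = by_payoff.get(payoff, (0, 0))
        by_payoff[payoff] = (hits + (reported > actual), total + 1)
    return {p: hits / total for p, (hits, total) in sorted(by_payoff.items())}

print(cheating_rate_by_payoff(simulate_participant(rng=random.Random(42))))
```

A "brazen liar" would correspond to a near-zero threshold and an "honest individual" to an unreachable one; the point of the sketch is simply that the payoff-cheating relationship is visible only when trials are grouped within the same participant.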

The article is here.

Bannon May Have Violated Ethics Pledge by Communicating With Breitbart

Lachlan Markay
Daily Beast
Originally published March 30, 2017

Here is an excerpt:

Bannon, Breitbart’s former chairman, has spoken directly to two of the company’s top editors since joining the White House. Trump’s predecessor publicly waived portions of the ethics pledge for similar communications, but the White House confirmed this week that it has not done so for Bannon.

“It seems to me to be a very clear violation,” Richard Painter, who was White House counsel for President George W. Bush, told The Daily Beast in an interview.

A White House spokesperson confirmed that every Trump appointee has signed the ethics pledge required by an executive order imposed by the president in January. No White House employees have received waivers to the pledge, the spokesperson added.

All incoming appointees are required to certify that they “will not for a period of 2 years from the date of my appointment participate in any particular matter involving specific parties that is directly and substantially related to my former employer or former clients.”

The article is here.

Friday, March 31, 2017

Dishonesty gets easier on the brain the more you do it

Neil Garrett
Aeon
Originally published March 7, 2017

Here are two excerpts:

These two ideas – the role of arousal on our willingness to cheat, and neural adaptation – are connected because the brain does not just adapt to things such as sounds and smells. The brain also adapts to emotions. For example, when presented with aversive pictures (eg, threatening faces) or receiving something unpleasant (eg, an electric shock), the brain will initially generate strong responses in regions associated with emotional processing. But when these experiences are repeated over time, the emotional responses diminish.

(cut)

There have also been a number of behavioural interventions proposed to curb unethical behaviour. These include using cues that emphasise morality and encouraging self-engagement. We don’t currently know the underlying neural mechanisms that can account for the positive behavioural changes these interventions drive. But an intriguing possibility is that they operate in part by shifting up our emotional reaction to situations in which dishonesty is an option, in turn helping us to resist the temptation to which we have become less resistant over time.

The article is here.

Signaling Emotion and Reason in Cooperation

Emma Edelman Levine, Alixandra Barasch, David G. Rand, Jonathan Z. Berman, and Deborah A. Small (February 23, 2017)

Abstract

We explore the signal value of emotion and reason in human cooperation. Across four experiments utilizing dyadic prisoner dilemma games, we establish three central results. First, individuals believe that a reliance on emotion signals that one will cooperate more so than a reliance on reason. Second, these beliefs are generally accurate — those who act based on emotion are more likely to cooperate than those who act based on reason. Third, individuals’ behavioral responses towards signals of emotion and reason depend on their own decision mode: those who rely on emotion tend to conditionally cooperate (that is, cooperate only when they believe that their partner has cooperated), whereas those who rely on reason tend to defect regardless of their partner’s signal. These findings shed light on how different decision processes, and lay theories about decision processes, facilitate and impede cooperation.
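To make the decision structure concrete, here is a minimal, hypothetical Python sketch of the dyadic prisoner's dilemma setup the abstract refers to. The payoff values, signal labels, and belief rules are assumptions for illustration only, not the authors' experimental materials.

```python
# Hypothetical illustration (not the authors' materials): a one-shot dyadic
# prisoner's dilemma in which each player sends a signal ("emotion"/"reason")
# before choosing. Payoffs and signal-belief rules are illustrative assumptions.

PAYOFFS = {  # (my_move, partner_move) -> my payoff; standard PD ordering T > R > P > S
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def conditional_cooperator(partner_signal):
    """Cooperate only when the partner's signal suggests cooperation
    (here: an 'emotion' signal is read as a cooperative cue)."""
    return "C" if partner_signal == "emotion" else "D"

def unconditional_defector(partner_signal):
    """Defect regardless of the partner's signal."""
    return "D"

def play(strategy_a, signal_a, strategy_b, signal_b):
    move_a = strategy_a(signal_b)
    move_b = strategy_b(signal_a)
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

# An emotion-signalling conditional cooperator meets a reason-based defector:
# the cooperator reads the "reason" signal as non-cooperative and also defects.
print(play(conditional_cooperator, "emotion", unconditional_defector, "reason"))
```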

Available at SSRN: https://ssrn.com/abstract=2922765

Editor's note: This research has implications for developing the therapeutic relationship.

Thursday, March 30, 2017

Risk considerations for suicidal physicians

Doug Brunk
Clinical Psychiatry News
Publish date: February 27, 2017

Here are two excerpts:

According to the American Foundation for Suicide Prevention, 300-400 physicians take their own lives every year, the equivalent of two to three medical school classes. “That’s a doctor a day we lose to suicide,” said Dr. Myers, a professor of clinical psychiatry at State University of New York, Brooklyn, who specializes in physician health. Compared with the general population, the suicide rate ratio is 2.27 among female physicians and 1.41 among male physicians (Am J Psychiatry. 2004;161[12]:2295-2302), and an estimated 85%-90% of those who carry out a suicide have a psychiatric illness such as major depressive disorder, bipolar disorder, alcohol use and substance use disorder, and borderline personality disorder. Other triggers common to physicians, Dr. Myers said, include other kinds of personality disorders, burnout, untreated anxiety disorders, substance/medication-induced depressive disorder (especially in clinicians who have been self-medicating), and posttraumatic stress disorder.

(cut)

Inadequate treatment can occur for physician patients because of transference and countertransference dynamics “that muddle the treatment dyad,” Dr. Myers added. “We must be mindful of the many issues that are going on when we treat our own.”

Association Between Physician Burnout and Identification With Medicine as a Calling

Andrew J. Jager, MA, Michael A. Tutty, PhD, and Audiey C. Kao, PhD
Mayo Clinic Proceedings
DOI: http://dx.doi.org/10.1016/j.mayocp.2016.11.012

Objective

To evaluate the association between degree of professional burnout and physicians' sense of calling.

Participants and Methods

US physicians across all specialties were surveyed between October 24, 2014, and May 29, 2015. Professional burnout was assessed using a validated single-item measure. Sense of calling, defined as committing one's life to personally meaningful work that serves a prosocial purpose, was assessed using 6 validated true-false items. Associations between burnout and identification with calling items were assessed using multivariable logistic regressions.
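As a reading aid for the odds ratios reported below, here is a minimal Python sketch (illustrative only, not the study's analysis code) of how a logistic-regression coefficient is converted into an odds ratio with a 95% confidence interval; the numbers in the example are made up.

```python
# Illustrative sketch only (not the study's code): how an odds ratio and its
# 95% CI are typically derived from a logistic-regression coefficient.
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Convert a log-odds coefficient and its standard error into an
    odds ratio with a 95% confidence interval."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Made-up example: a coefficient of -1.0 (SE 0.3) corresponds to an OR of
# about 0.37 with a 95% CI of roughly (0.20, 0.66).
or_, (lo, hi) = odds_ratio_with_ci(-1.0, 0.3)
print(f"OR={or_:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

An OR below 1, with a CI that excludes 1, indicates lower odds of endorsing the calling item among burned-out physicians, which is how the results in the next section should be read.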

Results

A total of 2263 physicians completed surveys (63.1% response rate). Among respondents, 28.5% (n=639) reported experiencing some degree of burnout. Compared with physicians who reported no burnout symptoms, those who were completely burned out had lower odds of finding their work rewarding (odds ratio [OR], 0.05; 95% CI, 0.02-0.10; P<.001), seeing their work as one of the most important things in their lives (OR, 0.38; 95% CI, 0.21-0.69; P<.001), or thinking their work makes the world a better place (OR, 0.38; 95% CI, 0.17-0.85; P=.02). Burnout was also associated with lower odds of enjoying talking about their work to others (OR, 0.23; 95% CI, 0.13-0.41; P<.001), choosing their work life again (OR, 0.11; 95% CI, 0.06-0.20; P<.001), or continuing with their current work even if they were no longer paid if they were financially stable (OR, 0.30; 95% CI, 0.15-0.59; P<.001).

Conclusion

Physicians who experience more burnout are less likely to identify with medicine as a calling. Erosion of the sense that medicine is a calling may have adverse consequences for physicians as well as those for whom they care.

Wednesday, March 29, 2017

Neuroethics and the Ethical Parity Principle

DeMarco, J.P. & Ford, P.J.
Neuroethics (2014) 7: 317.
doi:10.1007/s12152-014-9211-6

Abstract

Neil Levy offers the most prominent moral principles that are specifically and exclusively designed to apply to neuroethics. His two closely related principles, labeled as versions of the ethical parity principle (EPP), are intended to resolve moral concerns about neurological modification and enhancement [1]. Though EPP is appealing and potentially illuminating, we reject the first version and substantially modify the second. Since his first principle, called EPP (strong), is dependent on the contention that the mind literally extends into external props such as paper notebooks and electronic devices, we begin with an examination of the extended mind hypothesis (EMH) and its use in Levy’s EPP (strong). We argue against reliance on EMH as support for EPP (strong). We turn to his second principle, EPP (weak), which is not dependent on EMH but is tied to the acceptable claim that the mind is embedded in, because dependent on, external props. As a result of our critique of EPP (weak), we develop a modified version of EPP (weak), which we argue is more acceptable than Levy’s principle. Finally, we evaluate the applicability of our version of EPP (weak).

The article is here.

Philosopher Daniel Dennett on AI, robots and religion

John Thornhill
Financial Times
Originally published March 3, 2017

Here are two excerpts:

AI experts tend to draw a sharp distinction between machine intelligence and human consciousness. Dennett is not so sure. Where many worry that robots are becoming too human, he argues humans have always been largely robotic. Our consciousness is the product of the interactions of billions of neurons that are all, as he puts it, “sorta robots”.

“I’ve been arguing for years that, yes, in principle it’s possible for human consciousness to be realised in a machine. After all, that’s what we are,” he says. “We’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.”

(cut)

The term “inversion of reason”, he says, came from one of Darwin’s 19th-century critics, outraged at the biologist’s counterintuitive thinking. Rather than accepting that an absolute intelligence was responsible for the creation of species, the critic denounced Darwin for believing that absolute ignorance had accomplished all the marvels of creative skill. “And of course that’s right. That’s exactly what Darwin was saying. Darwin says the nightingale is created by a process with no intelligence at all. So that’s the first inversion of reasoning.”

The article is here.