Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, June 24, 2017

Consistent Belief in a Good True Self in Misanthropes and Three Interdependent Cultures

J. De Freitas, H. Sarkissian, G. E. Newman, I. Grossmann, and others
Cognitive Science, 2017 Jun 6.


People sometimes explain behavior by appealing to an essentialist concept of the self, often referred to as the true self. Existing studies suggest that people tend to believe that the true self is morally virtuous; that is, deep inside, every person is motivated to behave in morally good ways. Is this belief particular to individuals with optimistic beliefs or people from Western cultures, or does it reflect a widely held cognitive bias in how people understand the self? To address this question, we tested the good true self theory against two potential boundary conditions that are known to elicit different beliefs about the self as a whole. Study 1 tested whether individual differences in misanthropy (the tendency to view humans negatively) predict beliefs about the good true self in an American sample. The results indicate a consistent belief in a good true self, even among individuals who have an explicitly pessimistic view of others. Study 2 compared true self-attributions across cultural groups, using samples from an independent culture (USA) and a diverse set of interdependent cultures (Russia, Singapore, and Colombia). Results indicated that the direction and magnitude of the effect were comparable across all groups we tested. The belief in a good true self appears robust across groups varying in cultural orientation or misanthropy, suggesting a consistent psychological tendency to view the true self as morally good.

A version of the paper is here.

Friday, June 23, 2017

Speaking up about traditional and professionalism-related patient safety threats: a national survey of interns and residents

Martinez W, Lehmann LS, Thomas EJ, et al
BMJ Qual Saf Published Online First: 25 April 2017.

Background Open communication between healthcare professionals about care concerns, also known as ‘speaking up’, is essential to patient safety.

Objective Compare interns' and residents' experiences, attitudes and factors associated with speaking up about traditional versus professionalism-related safety threats.

Design Anonymous, cross-sectional survey.

Setting Six US academic medical centres, 2013–2014.

Participants 1800 medical and surgical interns and residents (47% responded).

Measurements Attitudes about, barriers and facilitators for, and self-reported experience with speaking up. Likelihood of speaking up and the potential for patient harm in two vignettes. Safety Attitude Questionnaire (SAQ) teamwork and safety scales; and Speaking Up Climate for Patient Safety (SUC-Safe) and Speaking Up Climate for Professionalism (SUC-Prof) scales.

Results Respondents more commonly observed unprofessional behaviour (75%, 628/837) than traditional safety threats (49%, 410/837; p<0.001), but reported speaking up about unprofessional behaviour less commonly (46%, 287/628 vs 71%, 291/410; p<0.001). Respondents more commonly reported fear of conflict as a barrier to speaking up about unprofessional behaviour compared with traditional safety threats (58%, 482/837 vs 42%, 348/837; p<0.001). Respondents were also less likely to speak up to an attending physician in the professionalism vignette than the traditional safety vignette, even when they perceived high potential patient harm (20%, 49/251 vs 71%, 179/251; p<0.001). Positive perceptions of SAQ teamwork climate and SUC-Safe were independently associated with speaking up in the traditional safety vignette (OR 1.90, 99% CI 1.36 to 2.66 and 1.46, 1.02 to 2.09, respectively), while only a positive perception of SUC-Prof was associated with speaking up in the professionalism vignette (1.76, 1.23 to 2.50).

Conclusions Interns and residents commonly observed unprofessional behaviour yet were less likely to speak up about it compared with traditional safety threats even when they perceived high potential patient harm. Measuring SUC-Safe, and particularly SUC-Prof, may fill an existing gap in safety culture assessment.

The article is here.

Moral Injury, Posttraumatic Stress Disorder, and Suicidal Behavior Among National Guard Personnel.

Craig Bryan, Anna Belle Bryan, Erika Roberge, Feea Leifker, & David Rozek
Psychological Trauma: Theory, Research, Practice, and Policy 


Objective: To empirically examine similarities and differences in the signs and symptoms of posttraumatic stress disorder (PTSD) and moral injury and to determine if the combination of these 2 constructs is associated with increased risk for suicidal thoughts and behaviors in a sample of U.S. National Guard personnel. Method: 930 National Guard personnel from the states of Utah and Idaho completed an anonymous online survey. Exploratory structural equation modeling (ESEM) was used to test a measurement model of PTSD and moral injury. A structural model was next constructed to test the interactive effects of PTSD and moral injury on history of suicide ideation and attempts. Results: Results of the ESEM confirmed that PTSD and moral injury were distinct constructs characterized by unique symptoms, although depressed mood loaded onto both PTSD and moral injury. The interaction of PTSD and moral injury was associated with significantly increased risk for suicide ideation and attempts. A sensitivity analysis indicated the interaction remained a statistically significant predictor of suicide attempt even among the subgroup of participants with a history of suicide ideation. Conclusion: PTSD and moral injury represent separate constructs with unique signs and symptoms. The combination of PTSD and moral injury confers increased risk for suicidal thoughts and behaviors, and differentiates between military personnel who have attempted suicide and those who have only thought about suicide.

The article is here.

Thursday, June 22, 2017

Is it dangerous for humans to depend on computers?

Rory Cellan-Jones
BBC News
Originally published June 5, 2017

Here is an excerpt:

In Britain, doctors whose computers froze during the recent ransomware attack had to turn patients away. In Ukraine, there were power cuts when hackers attacked the electricity system, and five years ago, millions of Royal Bank of Scotland customers were unable to get at their money for days after problems with a software upgrade.

Already some people have had enough. This week a letter to the Guardian newspaper warned that the modern world was "dangerously exposed by this reliance on the internet and new technology".
The correspondent, quite possibly a retired government employee, continued: "there are just enough old-time civil servants left alive to turn back the clock and take away our dangerous dependence on modern technology."

Somehow, though, I don't see this happening. Airlines are not going to scrap the computers and tick passengers off on a paper list before they climb aboard, bank clerks will not be entering transactions in giant ledgers in copperplate writing.

In fact, computers will take over more and more functions once restricted to humans, most of them far more useful than a game of Go. And that means that at home, at work and at play we will have to get used to seeing our lives disrupted when those clever machines suffer the occasional nervous breakdown.

The article is here.

Teaching Humility in an Age of Arrogance

Michael Patrick Lynch
The Chronicle of Higher Education
Originally published June 5, 2017

Here is an excerpt:

Our cultural embrace of epistemic or intellectual arrogance is the result of a toxic mix of technology, psychology, and ideology. To combat it, we have to reconnect with some basic values, including ones that philosophers have long thought were essential both to serious intellectual endeavors and to politics.

One of those ideas, as I just noted, is belief in objective truth. But another, less-noted concept is intellectual humility. By intellectual humility, I refer to a cluster of attitudes that we can take toward ourselves — recognizing your own fallibility, realizing that you don’t really know as much as you think, and owning your limitations and biases.

But being intellectually humble also means taking an active stance. It means seeing your worldview as open to improvement by the evidence and experience of other people. Being open to improvement is more than just being open to change. And it isn’t just a matter of self-improvement — using your genius to know even more. It is a matter of seeing your view as capable of improvement because of what others contribute.

Intellectual humility is not the same as skepticism. Improving your knowledge must start from a basis of rational conviction. That conviction allows you to know when to stop inquiring, when to realize that you know enough — that the earth really is round, the climate is warming, the Holocaust happened, and so on. That, of course, is tricky, and many a mistake in science and politics has been made because someone stopped inquiring before they should have. Hence the emphasis on evidence; being intellectually humble requires being responsive to the actual evidence, not to flights of fancy or conspiracy theories.

The article is here.

Wednesday, June 21, 2017

The Specialists’ Stranglehold on Medicine

Jamie Koufman
The New York Times - Opinion
Originally posted June 3, 2017

Here is an excerpt:

Neither the Affordable Care Act nor the Republicans’ American Health Care Act addresses the way specialists are corrupting our health care system. What we really need is what I’d call a Health Care Accountability Act.

This law would return primary care to the primary care physician. Every patient should have one trusted doctor who is responsible for his or her overall health. Resources must be allocated to expand those doctors’ education and training. And then we have to pay them more.

There are approximately 860,000 practicing physicians in the United States today, and too few — about a third — deliver primary care. In general, they make less than half as much money as specialists. I advocate a 10 percent to 20 percent reduction in specialist reimbursement, with that money being allocated to primary care doctors.

Those doctors should have to approve specialist referrals — they would be the general contractor in the building metaphor. There is strong evidence that long-term oversight by primary care doctors increases the quality of care and decreases costs.

The bill would mandate the disclosure of procedures’ costs up front. The way it usually works now is that right before a medical procedure, patients are asked to sign multiple documents, including a guarantee that they will pay whatever is not covered by insurance.  But they will have no way of knowing what the procedure actually costs. Their insurance may cover 90 percent, but are they liable for 10 percent of $10,000 or $100,000?

We also need more oversight of those costs. Instead of letting specialists’ lobbyists set costs, payment algorithms should be determined by doctors with no financial stake in the field, or even by non-physicians like economists. An Independent Payment Advisory Board was created by Obamacare; it should be expanded and adequately funded.

The article is here.

The GOP's risky premium pledge

Jennifer Haberkorn
Originally posted June 5, 2017

Senate Republicans may be all over the map on an Obamacare repeal plan, but on one fundamental point — reducing insurance premiums — they are in danger of overpromising and underdelivering.

The reality is they have only a few ways to reduce Americans’ premiums: Offer consumers bigger subsidies. Allow insurers to offer skimpier coverage. Or permit insurers to charge more — usually much more — to those with pre-existing illnesses and who are older and tend to rack up the biggest bills.

Since there’s no appetite within the GOP for throwing more taxpayer money at the problem, Republicans will need to make some hard decisions to hit their goal. But the effort to drive down premium prices will inevitably create a new set of winners and losers and complicate leadership’s path to the 50 votes they need to fulfill their seven-year promise to repeal Obamacare.

“Anyone can figure out how to reduce premiums,” said Sen. Chris Murphy (D-Conn.). “You can reduce premiums by kicking everybody that has a pre-existing condition off insurance or dramatically reducing benefits.”

Republicans say that Obamacare’s insurance regulations are responsible for making coverage prohibitively expensive and contend that premiums would fall if those rules are rolled back. They say they have multiple ideas about how to roll those back while also insulating the most vulnerable but have yet to weave those together into actual legislation.

The article is here.

Tuesday, June 20, 2017

Face-saving or fair-minded: What motivates moral behavior?

Alexander W. Cappelen, Trond Halvorsen, Erik Ø. Sørensen, and Bertil Tungodden
Journal of the European Economic Association (2017) 15 (3): 540-557.


We study the relative importance of intrinsic moral motivation and extrinsic social motivation in explaining moral behavior. The key feature of our experiment is that we introduce a dictator game design that manipulates these two sources of motivation. In one set of treatments, we manipulate the moral argument for sharing, in another we manipulate the information given to the recipient about the context of the experiment and the dictator's decision. The paper offers two main findings. First, we provide evidence of intrinsic moral motivation being of fundamental importance. Second, we show that extrinsic social motivation matters and is crowding-in with intrinsic moral motivation. We also show that intrinsic moral motivation is strongly associated with self-reported charitable giving outside the lab and with political preferences.

The research is here.

Theory from the ruins

Stuart Walton
Originally posted May 31, 2017

Here is an excerpt:

When reason enabled human beings to interpret the natural world around them in ways that ceased to frighten them, it was a liberating faculty of the mind. However, in the Frankfurt account, its fatal flaw was that it depended on domination, on subjecting the external world to the processes of abstract thought. Eventually, by a gradual process of trial and error, everything in the phenomenal world would be explained by scientific investigation, which would lay bare the previously hidden rules and principles by which it operated, and which could be demonstrated anew any number of times. The rationalising faculty had thereby become, according to the Frankfurt philosophers, a tyrannical process, through which all human experience of the world would be subjected to infinitely repeatable rational explanation; a process in which reason had turned from being liberating to being the instrumental means of categorising and classifying an infinitely various reality.

Culture itself was subject to a kind of factory production in the cinema and recording industries. The Frankfurt theorists maintained a deep distrust of what passed as ‘popular culture’, which neither enlightened nor truly entertained the mass of society, but only kept people in a state of permanently unsatiated demand for the dross with which they were going to be fed anyway. And driving the whole coruscating analysis was a visceral commitment to the Marxist theme of the presentness of the past. History was not just something that happened yesterday, but a dynamic force that remained active in the world of today, which was its material product and its consequence. By contrast, the attitude of instrumental reason produced only a version of the past that ascended towards the triumph of the enlightened and democratic societies of the present day.

Since these ideas were first elaborated, they have been widely rejected or misunderstood. Postmodernism, which refuses all historical grand narratives, has done its best to overlook what are some of the grandest narratives that Western philosophy ever produced. Despite this, these polemical theories remain indispensable in the present globalised age, when the dilemmas and malaises that were once specific to Western societies have expanded to encompass almost the whole globe. As a new era of irrationalism dawns on humankind, with corruption and mendacity becoming a more or less openly avowed modus operandi of all shades of government, the Frankfurt analysis urges itself upon us once more.

The article is here.

Monday, June 19, 2017

The Value of Sharing Information: A Neural Account of Information Transmission

Elisa C. Baek, Christin Scholz, Matthew Brook O’Donnell, & Emily Falk
Psychological Science
May 2017


Humans routinely share information with one another. What drives this behavior? We used neuroimaging to test an account of information selection and sharing that emphasizes inherent reward in self-reflection and connecting with other people. Participants underwent functional MRI while they considered personally reading and sharing New York Times articles. Activity in neural regions involved in positive valuation, self-related processing, and taking the perspective of others was significantly associated with decisions to select and share articles, and scaled with preferences to do so. Activity in all three sets of regions was greater when participants considered sharing articles with other people rather than selecting articles to read themselves. The findings suggest that people may consider value not only to themselves but also to others even when selecting news articles to consume personally. Further, sharing heightens activity in these pathways, in line with our proposal that humans derive value from self-reflection and connecting to others via sharing.

The article is here.

The behavioral and neural basis of empathic blame

Indrajeet Patil, Marta Calò, Federico Fornasier, Fiery Cushman, Giorgia Silani
Forthcoming in Scientific Reports


Mature moral judgments rely both on a perpetrator’s intent to cause harm, and also on the actual harm caused—even when unintended. Much prior research asks how intent information is represented neurally, but little asks how even unintended harms influence judgment. We interrogate the psychological and neural basis of this process, focusing especially on the role of empathy for the victim of a harmful act. Using fMRI, we found that the ‘empathy for pain’ network was involved in encoding harmful outcomes and integrating harmfulness information for different types of moral judgments, and individual differences in the extent to which this network was active during encoding and integration of harmfulness information determined severity of moral judgments. Additionally, activity in the network was down-regulated for acceptability, but not blame, judgments in the accidental harm condition, suggesting that these two types of moral evaluations are neurobiologically dissociable. These results support a model of “empathic blame”, whereby the perceived suffering of a victim colors moral judgment of an accidental harmdoer.

The paper is here.

Sunday, June 18, 2017

Has Physician-Assisted Death Become the “Good Death?”

Franklin G. Miller
The Hastings Center
Originally published May 30, 2017

“Death with dignity” for the past 40 years has meant, for many people, avoiding unwanted medical technology and dying in a hospital.  A “natural” death at home or in a hospice facility has been the goal.   During the last 20 years, physician-assisted suicide has been legalized for terminally ill patients in several states of the United States, and recently “medical assistance in dying,” which also includes active euthanasia, has become legal in Canada.  How should we think about what constitutes a good death now?

There are signs of a cultural shift, in which physician-assisted death is not just a permitted choice by which individuals can control the timing and circumstances of their death but is taken as a model of the good death.  A recent lengthy front page article in the New York Times recounts a case of physician-assisted death in Canada in a way that strongly suggests that a planned, orchestrated death is the ideal way to die.  While I have long supported a legal option of physician-assisted suicide for the terminally ill, I believe that this cultural shift deserves critical scrutiny.

The article is here.

Saturday, June 17, 2017

Taking Single-Payer Seriously

Dave Kamper
Jacobin Magazine
Originally published May 28, 2017

Here is an excerpt:

Medicare for All wouldn’t just scrap Obamacare — it would uproot the entire industry. It would be a huge efficiency savings. But it would also be devastating in the short term for hundreds of thousands of working people whose only crime was getting a job at an insurance company, and the hundreds of thousands more who work as billing specialists for clinics and hospitals (the number of medical assistants shot up 44 percent between 2011 and 2016). Yes, the CEO of United Health Group made $101 million in 2011. But few of the 230,000 other people working for the company saw money like that.

Bernie Sanders’s recently announced Medicare for All plan asserts that we “need a health care system that significantly reduces overhead, administrative costs, and complexity,” and projects that his plan would save $6 trillion over ten years.

The article is here.

Friday, June 16, 2017

Do You Want to Be a Cyborg?

Agata Sagan and Peter Singer
Project Syndicate
Originally posted May 17, 2017

Here is an excerpt:

In the United States, Europe, and most other countries with advanced biomedical research, strict regulations on the use of human subjects would make it extremely difficult to get permission to carry out experiments aimed at enhancing our cognitive abilities by linking our brains to computers. US regulations drove Phil Kennedy, a pioneer in the use of computers to enable paralyzed patients to communicate by thought alone, to have electrodes implanted in his own brain in order to make further scientific progress. Even then, he had to go to Belize, in Central America, to find a surgeon willing to perform the operation. In the United Kingdom, cyborg advocate Kevin Warwick and his wife had data arrays implanted in their arms to show that direct communication between the nervous systems of separate human beings is possible.

Musk has suggested that the regulations governing the use of human subjects in research could change. That may take some time. Meanwhile, freewheeling enthusiasts are going ahead anyway. Tim Cannon doesn’t have the scientific or medical qualifications of Phil Kennedy or Kevin Warwick, but that hasn’t stopped him from co-founding a Pittsburgh company that implants bionic devices, often after he has first tried them out on himself. His attitude is, as he said at an event billed as “The world’s first cyborg-fair,” held in Düsseldorf in 2015, “Let’s just do it and really go for it.”

People at the Düsseldorf cyborg-fair had magnets, radio frequency identification chips, and other devices implanted in their fingers or arms. The surgery is often carried out by tattooists and sometimes veterinarians, because qualified physicians and surgeons are reluctant to operate on healthy people.

The article is here.

On What Basis Do Terrorists Make Moral Judgments?

Kendra Pierre-Louis
Popular Science
Originally published May 26, 2017

Here is an excerpt:

“Multiple studies across the world have systematically shown that in judging the morality of an action, civilized individuals typically attach greater importance to intentions than outcomes,” Ibáñez told PopSci. “If an action is aimed to induce harm, it does not matter whether it was successful or not: most people consider it as less morally admissible than other actions in which harm was neither intended nor inflicted, or even actions in which harm was caused by accident.”

For most of us, intent matters. If I mean to slam you to the ground and I fail, that’s far worse than if I don’t mean to slam you to the ground and I do. If that sounds like a no-brainer, you should know that for the terrorists in the study, the morality was flipped. They rated accidental harm as worse than the failed intentional harm, because in one situation someone doesn’t get hurt, while in the second situation someone does. Write the study’s authors, “surprisingly, this moral judgement resembles that observed at early development stages.”

Perhaps more chilling, this tendency to focus on the outcomes rather than the underlying intention means that the terrorists are focused more on outcomes than your average person, and that terror behavior is "goal directed." Write the study's authors, "... our sample is characterized by a general tendency to focus more on the outcomes of actions than on the actions' underlying intentions." In essence, terrorism is the world's worst productivity system, because when coupled with rational choice theory—which says that we tend to act in ways that maximize getting our way with the least amount of personal sacrifice—murdering a lot of people to get your goal, absent moral stigma, starts to make sense.

The article is here.

Thursday, June 15, 2017

How the Science of “Blue Lies” May Explain Trump’s Support

Jeremy Adam Smith
Scientific American
Originally posted on March 24, 2017

Here are two excerpts:

This has led many people to ask themselves: How does the former reality-TV star get away with it? How can he tell so many lies and still win support from many Americans?

Journalists and researchers have suggested many answers, from a hyperbiased, segmented media to simple ignorance on the part of GOP voters. But there is another explanation that no one seems to have entertained. It is that Trump is telling “blue lies”—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen bonds among the members of that group.


This research—and these stories—highlights a difficult truth about our species: we are intensely social creatures, but we are prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be prosocial—compassionate, empathetic, generous, honest—in their group and aggressively antisocial toward out-groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.

“People condone lying against enemy nations, and since many people now see those on the other side of American politics as enemies, they may feel that lies, when they recognize them, are appropriate means of warfare,” says George Edwards, a political scientist at Texas A&M University and one of the country’s leading scholars of the presidency.

The article is here.

Act Versus Impact: Conservatives and Liberals Exhibit Different Structural Emphases in Moral Judgment

Ivar R. Hannikainen, M. Miller, A. Cushman
Ratio (2017).


Conservatives and liberals disagree sharply on matters of morality and public policy. We propose a novel account of the psychological basis of these differences. Specifically, we find that conservatives tend to emphasize the intrinsic value of actions during moral judgment, in part by mentally simulating themselves performing those actions, while liberals instead emphasize the value of the expected outcomes of the action. We then demonstrate that a structural emphasis on actions is linked to the condemnation of victimless crimes, a distinctive feature of conservative morality. Next, we find that the conservative and liberal structural approaches to moral judgment are associated with their corresponding patterns of reliance on distinct moral foundations. In addition, the structural approach uniquely predicts that conservatives will be more opposed to harm in circumstances like the well-known trolley problem, a result which we replicate. Finally, we show that the structural approaches of conservatives and liberals are partly linked to underlying cognitive styles (intuitive versus deliberative). Collectively, these findings forge a link between two important yet previously independent lines of research in political psychology: cognitive style and moral foundations theory.

The article is here.

Wednesday, June 14, 2017

You’re Not Going to Change Your Mind

Ben Tappin, Leslie Van Der Leer and Ryan McKay
The New York Times
Originally published May 28, 2017

A troubling feature of political disagreement in the United States today is that many issues on which liberals and conservatives hold divergent views are questions not of value but of fact. Is human activity responsible for global warming? Do guns make society safer? Is immigration harmful to the economy?

Though undoubtedly complicated, these questions turn on empirical evidence. As new information emerges, we ought to move, however fitfully, toward consensus.

But we don’t. Unfortunately, people do not always revise their beliefs in light of new information. On the contrary, they often stubbornly maintain their views. Certain disagreements stay entrenched and polarized.

Why? A common explanation is confirmation bias. This is the psychological tendency to favor information that confirms our beliefs and to disfavor information that counters them — a tendency manifested in the echo chambers and “filter bubbles” of the online world.

If this explanation is right, then there is a relatively straightforward solution to political polarization: We need to consciously expose ourselves to evidence that challenges our beliefs to compensate for our inclination to discount it.

But what if confirmation bias isn’t the only culprit?

The article is here.

Should We Outsource Our Moral Beliefs to Others?

Grace Boey
3 Quarks Daily
Originally posted May 29, 2017

Here is an excerpt:

Setting aside the worries above, there is one last matter that many philosophers take to be the most compelling candidate for the oddity of outsourcing our moral beliefs to others. As moral agents, we’re interested in more than just accumulating as many true moral beliefs as possible, such as ‘abortion is permissible’, or ‘killing animals for sport is wrong’. We also value things such as developing moral understanding, cultivating virtuous characters, having appropriate emotional reactions, and the like. Although moral deference might allow us to acquire bare moral knowledge from others, it doesn’t allow us to reflect or cultivate these other moral goods which are central to our moral identity.

Consider the value we place on understanding why we think our moral beliefs are true. Alison Hills notes that pure moral deference can’t get us to such moral understanding. When Bob defers unquestioningly to Sally’s judgment that abortion is morally permissible, he lacks an understanding of why this might be true. Amongst other things, this prevents Bob from being able to articulate, in his own words, the reasons behind this claim. This seems strange enough in itself, and Hills argues for at least two reasons why Bob’s situation is a bad one. For one, Bob’s lack of moral understanding prevents him from acting in a morally worthy way. Bob wouldn’t deserve any moral praise for, say, shutting down someone who harasses women who undergo the procedure.

Moreover, Bob’s lack of moral understanding seems to reflect a lack of good moral character, or virtue. Bob’s belief that ‘late-term abortion is permissible’ isn’t integrated with the rest of his thoughts, motivations, emotions, and decisions. Moral understanding, of course, isn’t all that matters for virtue and character. But philosophers who disagree with Hills on this point, like Robert Howell and Errol Lord, also note that moral deference reflects a lack of virtue and character in other ways, and can prevent the cultivation of these traits.

The article is here.

Tuesday, June 13, 2017

Why It’s So Hard to Admit You’re Wrong

Kristin Wong
The New York Times
Originally published May 22, 2017

Here are two excerpts:

Mistakes can be hard to digest, so sometimes we double down rather than face them. Our confirmation bias kicks in, causing us to seek out evidence to prove what we already believe. The car you cut off has a small dent in its bumper, which obviously means that it is the other driver’s fault.

Psychologists call this cognitive dissonance — the stress we experience when we hold two contradictory thoughts, beliefs, opinions or attitudes.


“Cognitive dissonance is what we feel when the self-concept — I’m smart, I’m kind, I’m convinced this belief is true — is threatened by evidence that we did something that wasn’t smart, that we did something that hurt another person, that the belief isn’t true,” said Carol Tavris, a co-author of the book “Mistakes Were Made (But Not by Me).”

She added that cognitive dissonance threatened our sense of self.

“To reduce dissonance, we have to modify the self-concept or accept the evidence,” Ms. Tavris said. “Guess which route people prefer?”

Or maybe you cope by justifying your mistake. The psychologist Leon Festinger suggested the theory of cognitive dissonance in the 1950s when he studied a small religious group that believed a flying saucer would rescue its members from an apocalypse on Dec. 20, 1954. Publishing his findings in the book “When Prophecy Fails,” he wrote that the group doubled down on its belief and said God had simply decided to spare the members, coping with their own cognitive dissonance by clinging to a justification.

“Dissonance is uncomfortable and we are motivated to reduce it,” Ms. Tavris said.

When we apologize for being wrong, we have to accept this dissonance, and that is unpleasant. On the other hand, research has shown that it can feel good to stick to our guns.

Psychiatry’s “Goldwater Rule” has never met a test like Donald Trump

Brian Resnick
Originally published May 25, 2017

Here is an excerpt:

Some psychiatrists are saying it’s time to rethink this core ethical guideline. The rule, they say, is acting like a gag order, preventing qualified psychiatrists from giving the public important perspective on the mental health of a president whose behavior is out of step with any other president in history.

“The public has a right to medical and psychiatric knowledge about its leaders — at least in a democracy,” Nassir Ghaemi, a Tufts University psychiatrist, recently argued at an APA conference. “Why can’t we have a reasoned scientific discussion on this matter? Why do we just have complete censorship?”

The controversy is sure to rage on, as many psychiatrists stand by the professional precedent. The rule itself has even been expanded recently. But just the existence of the debate is an incredible moment not only in the field of psychiatry but in American politics. It’s not just armchair psychiatrists who are concerned about Trump’s mental health — some of the real ones are even willing to rethink their professional ethics because of it.

The article is here.

Monday, June 12, 2017

New bill requires annual ethics training for lawmakers

Pete Kasperowicz
The Washington Examiner
Originally posted May 26, 2017

Members of the House would have to undergo mandated annual ethics training under a new bill offered by Reps. David Cicilline, D-R.I., and Dave Trott, R-Mich.

The two lawmakers said senators are already taking "ongoing" ethics classes, and House staffers are required to undergo training each year. But House lawmakers themselves are exempt.

"Elected officials should always be held to the highest standards of conduct," Cicilline said Thursday. "That's why it's absurd that members of the U.S. House do not have to complete annual ethics training. We need to close this loophole now."

Trott said his constituents believe lawmakers are above the law, and said his bill would help address that complaint.

"No one is above the law, and members of Congress must live by the laws they create," he said.

The article is here.

Views of US Moral Values Slip to Seven-Year Lows

Originally posted May 22, 2017

Americans' ratings of U.S. moral values, consistently negative through the years, have slipped to their lowest point in seven years. More than four in five (81%) now rate the state of moral values in the U.S. as only fair or poor.

Since Gallup first asked in 2002 whether the nation's moral values were getting better or getting worse, the percentage saying worse has always been well above the majority level, ranging from a low of 64% in November 2004 to a high of 82% in May 2007. Over the past six years, it has stayed within a five-point range, reaching a low of 72% in 2013 and 2015 before climbing to this year's high of 77%.

Gallup's question about whether moral values are getting better or worse has been asked over the same 16-year span as the question about the current state of moral values. Through the years, the combined percentage rating moral values as only fair or poor has generally tracked the percentage saying moral values are getting worse.

The article is here.

Sunday, June 11, 2017

Beyond Googling: The Ethics of Using Patients' Electronic Footprints in Psychiatric Practice

Carl Fisher and Paul Appelbaum
Harvard Review of Psychiatry


Electronic communications are an increasingly important part of people's lives, and much information is accessible through such means. Anecdotal clinical reports indicate that mental health professionals are beginning to use information from their patients' electronic activities in treatment and that their data-gathering practices have gone far beyond simply searching for patients online. Both academic and private sector researchers are developing mental health applications to collect patient information for clinical purposes. Professional societies and commentators have provided minimal guidance, however, about best practices for obtaining or using information from electronic communications or other online activities. This article reviews the clinical and ethical issues regarding use of patients' electronic activities, primarily focusing on situations in which patients share information with clinicians voluntarily. We discuss the potential uses of mental health patients' electronic footprints for therapeutic purposes, and consider both the potential benefits and the drawbacks and risks. Whether clinicians decide to use such information in treating any particular patient—and if so, the nature and scope of its use—requires case-by-case analysis. But it is reasonable to assume that clinicians, depending on their circumstances and goals, will encounter circumstances in which patients' electronic activities will be relevant to, and useful in, treatment.

The article is here.

Saturday, June 10, 2017

Feds probing psychiatric hospitals for locking in patients to boost profits

Beth Mole
Ars Technica
Originally published May 24, 2017

At least three US federal agencies are now investigating Universal Health Services over allegations that its psychiatric hospitals keep patients longer than needed in order to milk insurance companies, Buzzfeed News reports.

According to several sources, UHS's chain of psychiatric facilities—the largest in the country—will delay patients' discharge dates until the day insurance coverage runs out, regardless of patients' needs. Because the hospitals are reimbursed per day, the practice extracts the maximum amount of money from insurance companies. It can also be devastating to patients, who are needlessly kept from returning to their jobs and families. To cover up the scheme, sources allege, medical notes are sometimes altered and doctors come up with excuses, such as medication changes. Employees say they repeatedly hear the phrase: “don’t leave days on the table.”

The Department of Health and Human Services has been investigating UHS for several years, as Buzzfeed has previously reported. UHS, a $12 billion company, gets a third of its revenue from government insurance providers. In 2013, HHS issued subpoenas to 10 UHS psychiatric hospitals.

But now it seems the Department of Defense and the FBI have also gotten involved.

The article is here.

How Gullible Are We? A Review of the Evidence From Psychology and Social Science.

Hugo Mercier
Review of General Psychology, May 18, 2017


A long tradition of scholarship, from ancient Greece to Marxism or some contemporary social psychology, portrays humans as strongly gullible—wont to accept harmful messages by being unduly deferent. However, if humans are reasonably well adapted, they should not be strongly gullible: they should be vigilant toward communicated information. Evidence from experimental psychology reveals that humans are equipped with well-functioning mechanisms of epistemic vigilance. They check the plausibility of messages against their background beliefs, calibrate their trust as a function of the source’s competence and benevolence, and critically evaluate arguments offered to them. Even if humans are equipped with well-functioning mechanisms of epistemic vigilance, an adaptive lag might render them gullible in the face of new challenges, from clever marketing to omnipresent propaganda. I review evidence from different cultural domains often taken as proof of strong gullibility: religion, demagoguery, propaganda, political campaigns, advertising, erroneous medical beliefs, and rumors. Converging evidence reveals that communication is much less influential than often believed—that religious proselytizing, propaganda, advertising, and so forth are generally not very effective at changing people’s minds. Beliefs that lead to costly behavior are even less likely to be accepted. Finally, it is also argued that most cases of acceptance of misguided communicated information do not stem from undue deference, but from a fit between the communicated information and the audience’s preexisting beliefs.

The article is here.

Friday, June 9, 2017

Sapolsky on the biology of human evil

Sean Illing
Originally posted May 23, 2017

Here is an excerpt:

The key question of the book — why are we the way we are? — is explored from a multitude of angles, and the narrative structure helps guide the reader. For instance, Sapolsky begins by examining a person’s behavior in the moment (why we recoil or rejoice or respond aggressively to immediate stimuli) and then zooms backward in time, following the chain of antecedent causes back to our evolutionary roots.

For every action, Sapolsky shows, there are several layers of causal significance: There’s a neurobiological cause and a hormonal cause and a chemical cause and a genetic cause, and, of course, there are always environmental and historical factors. He synthesizes the research across these disciplines into a coherent, readable whole.

In this interview, I talk with Sapolsky about the paradoxes of human nature, why we’re capable of both good and evil, whether free will exists, and why symbols have become so central to human life.

The article and interview are here.

Are practitioners becoming more ethical?

By Rebecca Clay
The Monitor on Psychology
May 2017, Vol 48, No. 5
Print version: page 50

The results of research presented at APA's 2016 Annual Convention suggest that today's practitioners are less likely to commit such ethical violations as kissing a client, altering diagnoses to meet insurance criteria and treating homosexuality as pathological than their counterparts 30 years ago.

The research, conducted by psychologists Rebecca Schwartz-Mette, PhD, of the University of Maine at Orono and David S. Shen-Miller, PhD, of Bastyr University, replicated a 1987 study by Kenneth Pope, PhD, and colleagues published in the American Psychologist. Schwartz-Mette and Shen-Miller asked 453 practicing psychologists the same 83 questions posed to practitioners three decades ago.

The items included clear ethical violations, such as having sex with a client or supervisee. But they also included behaviors that could reasonably be construed as ethical, such as breaking confidentiality to report child abuse; behaviors that are ambiguous or not specifically prohibited, such as lending money to a client; and even some that don't seem controversial, such as shaking hands with a client. "Interestingly, 75 percent of the items from the Pope study were rated as less ethical in our study, suggesting a more general trend toward conservativism in multiple areas," says Schwartz-Mette.

The article is here.

Thursday, June 8, 2017

Shining Light on Conflicts of Interest

Craig Klugman
The American Journal of Bioethics 
Volume 17, 2017 - Issue 6

Chimonas, DeVito and Rothman (2017) offer a descriptive target article that examines physicians' knowledge of and reaction to the Sunshine Act's Open Payments Database. This program is a federal computer repository of all payments and goods worth over $10 made by pharmaceutical companies and device manufacturers to physicians. Created under the 2010 Affordable Care Act, the database is meant to make the relationships between physicians and the medical drug/device industry more transparent. Such transparency is often touted as a solution to financial conflicts of interest (COI). A COI occurs when a person owes fealty to more than one party. For example, physicians have fiduciary duties toward patients. At the same time, when physicians receive gifts or benefits from a pharmaceutical company, they are more likely to prescribe that company's products (Spurling et al. 2010). The gift creates a sense of a moral obligation toward the company. These two interests can be (but may not be) in conflict. Such arrangements can undermine a patient's trust in his/her physician, and more broadly, the public's trust of medicine.


The idea is that if people are told about the conflict, then they can judge for themselves whether the provider is compromised and whether they wish to receive care from this person. The database exists with this intent—that transparency alone is enough. What is a patient to do with this information? Should patients avoid physicians who have conflicts? The decision is left in the patient's hands. Back in 2014, the Pharmaceutical Research and Manufacturers of America lobbying group expressed concern that the public would not understand the context of any payments or gifts to physicians (Castellani 2014).

The article is here.

The AI Cargo Cult: The Myth of Superhuman AI

Kevin Kelly
Originally posted April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The article is here.

Wednesday, June 7, 2017

What do White House Rules Mean if They Can Be Circumvented?

Sheelah Kolhatkar
The New Yorker
Originally posted June 6, 2017

Here is an excerpt:

Each Administration establishes its own ethics rules, often by executive order, which go beyond ethics laws codified by Congress (those laws require such things as financial-disclosure forms from government employees, the divestiture of assets if they pose conflicts, and recusal from government matters if they intersect with personal business). While the rules established by law are hard and fast, officials can be granted waivers from the looser executive-order rules. The Obama Administration granted a handful of such waivers over the course of its eight years. What’s startling with the Trump White House is just how many waivers have been issued so early in Trump’s term—more than a dozen were disclosed last week, with another twenty-four expected this week, according to a report in the Wall Street Journal—as well as the Administration’s attempt to keep them secret, all while seeming to flout the laws that dictate how the whole system should work.

The ethics waivers made public last week apply to numerous officials who are now working on matters affecting the same companies and industries they represented before joining the Administration. The documents were only released after the Office of Government Ethics pressed the Trump Administration to make them public, which is how they have been handled in the past; the White House initially refused, attempting to argue that the ethics office lacked the standing to even ask for them. After a struggle, the Administration relented, but many of the waivers it released were missing critical information, such as the dates when they were issued. One waiver in particular, which appears to apply to Trump’s chief strategist, Stephen Bannon, without specifically naming him, grants Administration staff permission to communicate with news organizations where they might have formerly worked (Breitbart News, in Bannon’s case). The Bannon-oriented waiver, issued by the “Counsel to the President,” contains the line “I am issuing this memorandum retroactive to January 20, 2017.”

Walter Shaub, the head of the Office of Government Ethics, quickly responded that there is no such thing as a “retroactive” ethics waiver. Shaub told the Times, “If you need a retroactive waiver, you have violated a rule.”

The article is here.

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017


Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that they need not receive as much scrutiny as, say, a medical researcher.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Research and clinical issues in trauma and dissociation: Ethical and logical fallacies, myths, misreports, and misrepresentations

Jenny Ann Rydberg
European Journal of Trauma & Dissociation
Available online 23 April 2017


Background: The creation of a new journal on trauma and dissociation is an opportunity to take stock of existing models and theories in order to distinguish mythical, and sometimes dangerous, stories from established facts.

Objective: To describe the professional, scientific, clinical, and ethical strategies and fallacies that must be envisaged when considering reports, claims, and recommendations relevant to trauma and dissociation.

Method: After a general overview, two current debates in the field, the stabilisation controversy and the false/recovered memory controversy, are examined in detail to illustrate such issues.

Results: Misrepresentations, misreports, and ethical and logical fallacies are frequent in the general and scientific literature regarding the stabilisation and false/recovered memory controversies.

Conclusion: A call is made for researchers and clinicians to strengthen their knowledge of and ability to identify such cognitive, logical, and ethical manoeuvres both in the scientific literature and in general media reports.

The article is here.

Monday, June 5, 2017

AI May Hold the Key to Stopping Suicide

Bahar Gholipour
NBC News
Originally posted May 23, 2017

Here is an excerpt:

So far the results are promising. Using AI, Ribeiro and her colleagues were able to predict whether someone would attempt suicide within the next two years at about 80 percent accuracy, and within the next week at 92 percent accuracy. Their findings were recently reported in the journal Clinical Psychological Science.

This high level of accuracy was possible because of machine learning, as researchers trained an algorithm by feeding it anonymous health records from 3,200 people who had attempted suicide. The algorithm learns patterns through examining combinations of factors that lead to suicide, from medication use to the number of ER visits over many years. Bizarre factors may pop up as related to suicide, such as acetaminophen use a year prior to an attempt, but that doesn't mean taking acetaminophen can be isolated as a risk factor for suicide.

"As humans, we want to understand what to look for," Ribeiro says. "But this is like asking what's the most important brush stroke in a painting."

With funding from the Department of Defense, Ribeiro aims to create a tool that can be used in clinics and emergency rooms to better find and help high-risk individuals.

The article is here.

Can Psychologists Tell Us Anything About Morality?

John M. Doris, Edouard Machery and Stephen Stich
Philosopher's Magazine
Originally published May 10, 2017

Here is an excerpt:

Some psychologists accept morally dubious employment. Some psychologists cheat. Some psychology experiments don't replicate. Some. But the inference from some to all is at best invalid, and at worst, invective. There's good psychology and bad psychology, just like there's good and bad everything else, and tarring the entire discipline with the broadest of brushes won’t help us sort that out. It is no more illuminating to disregard the work of psychologists en masse on the grounds that a tiny minority of the American Psychological Association, a very large and diverse professional association, were involved with the Bush administration’s program of torture than it would be to disregard the writings of all Nietzsche scholars because some Nazis were Nietzsche enthusiasts! To be sure, there are serious questions about which intellectual disciplines, and which intellectuals, are accorded cultural capital, and why. But we are unlikely to find serious answers by means of innuendo and polemic.

Could there be more substantive reasons to exclude scientific psychology from the study of ethics? The most serious – if ultimately unsuccessful – objection proceeds in the language of “normativity”. For philosophers, normative statements are prescriptive, or “oughty”: in contrast to descriptive statements, which aspire only to say how the world is, normative statements say what ought be done about it. And, some have argued, never the twain shall meet.

While philosophers haven’t enjoyed enviable success in adducing lawlike generalisations, one such achievement is Hume’s Law (we told you the issues are old ones), which prohibits deriving normative statements from descriptive statements. As the slogan goes, “is doesn’t imply ought.”

Many philosophers, ourselves included, suppose that Hume is on to something. There probably exists some sort of “inferential barrier” between the is and the ought, such that there are no strict logical entailments from the descriptive to the normative.

The article is here.

Sunday, June 4, 2017

Physicians, Firearms, and Free Speech

Wendy E. Parmet, Jason A. Smith, and Matthew Miller
N Engl J Med 2017; 376:1901-1903
May 18, 2017

Here is an excerpt:

The majority’s well-reasoned decision, in fact, does just that. By relying on heightened rather than strict scrutiny, the majority affirmed that laws regulating physician speech must be designed to enhance rather than harm patient safety. The majority took this mandate seriously and required the state to show some meaningful evidence that the regulation was apt to serve the state’s interest in protecting patients.

The state could not do so for two reasons. First, the decision to keep a gun in the home substantially increases the risk of death for all household members, especially the risk of death by suicide, and particularly so when guns are stored loaded and unlocked, as they are in millions of homes where children live.  Second, the majority of U.S. adults who live in homes with guns are unaware of the heightened risk posed by bringing guns into a home.  Indeed, by providing accurate information about the risks created by easy access to firearms, as well as ways to modify that risk (e.g., by storing guns unloaded and locked up, separate from ammunition), a physician’s counseling can not only enhance a patient’s capacity for self-determination, but also save lives.

Given the right to provide such counsel, professional norms recognize the responsibility to do so. Fulfilling this obligation, however, may not be easy, since the chief impediments to doing so — and to doing so effectively — are not and never have been legal barriers. Indeed, the court’s welcome ruling does not ensure that most clinicians will honor this hard-won victory by exercising their First Amendment rights.

The article is here.

Saturday, June 3, 2017

Trump Exempts Entire Senior Staff From White House Ethics Rules

Lachlan Markay
The Daily Beast
Originally published May 31, 2017

Here is an excerpt:

Andrew Olmem, another White House economist and a former lobbyist for a host of large financial services and insurance firms, will be free to work with former clients on specific issues affecting bank capital requirements, financial regulation of insurers, and the Puerto Rican debt crisis, all issues on which he has recently lobbied.

Those officials have been given freer rein to advance their former clients’ financial interests, but ethics rules have also been waived for every other “commissioned officer”—staffers who report directly to the president—in the White House who has worked for a political group in the past two years.

That will allow a number of White House staffers to collaborate with pro-Trump advocacy operations. The West Wing is stacked with officials who have made significant sums working, consulting for, or representing high-profile political organizations, including networks of groups financed by the Trump-backing Mercer family and the libertarian Koch family.

Conway herself consulted for more than 50 political, policy, and advocacy organizations last year, according to a White House financial disclosure statement.

The article is here.

Friday, June 2, 2017

The meaning of life in a world without work

Yuval Noah Harari
The Guardian
Originally posted May 8, 2017

Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The article is here.

The Theory of Dyadic Morality: Reinventing Moral Judgment by Redefining Harm

Chelsea Schein, Kurt Gray
Personality and Social Psychology Review 
First Published May 14, 2017


The nature of harm—and therefore moral judgment—may be misunderstood. Rather than an objective matter of reason, we argue that harm should be redefined as an intuitively perceived continuum. This redefinition provides a new understanding of moral content and mechanism—the constructionist Theory of Dyadic Morality (TDM). TDM suggests that acts are condemned proportional to three elements: norm violations, negative affect, and—importantly—perceived harm. This harm is dyadic, involving an intentional agent causing damage to a vulnerable patient (A→P). TDM predicts causal links both from harm to immorality (dyadic comparison) and from immorality to harm (dyadic completion). Together, these two processes make the “dyadic loop,” explaining moral acquisition and polarization. TDM argues against intuitive harmless wrongs and modular “foundations,” but embraces moral pluralism through varieties of values and the flexibility of perceived harm. Dyadic morality impacts understandings of moral character, moral emotion, and political/cultural differences, and provides research guidelines for moral psychology.

The article is here.

Thursday, June 1, 2017

Nudges in a post-truth world

Neil Levy
Journal of Medical Ethics 
Published Online First: 19 May 2017


Nudges—policy proposals informed by work in behavioural economics and psychology that are designed to lead to better decision-making or better behaviour—are controversial. Critics allege that they bypass our deliberative capacities, thereby undermining autonomy and responsible agency. In this paper, I identify a kind of nudge I call a nudge to reason, which makes us more responsive to genuine evidence. I argue that at least some nudges to reason do not bypass our deliberative capacities. Instead, use of these nudges should be seen as appeals to mechanisms partially constitutive of these capacities, and therefore as benign (so far as autonomy and responsible agency are concerned). I sketch some concrete proposals for nudges to reason which are especially important given the apparent widespread resistance to evidence seen in recent political events.

The article is here.

There is no liberal right to sex with students

Maya J. Goldenberg, Karen Houle, Monique Deveaux, Karyn L. Freedman, & Patricia Sheridan
The Times Higher Education
Originally posted May 4, 2017

There is a long and distinguished history of conceptualising liberal democracy in terms of basic rights to which, all other things being equal, everyone is entitled. Sexual freedom is rightly counted among these. But should this right apply where one person is in a position of power and authority over the other? Doctors are sanctioned if they have sex with their patients, as are lawyers who sleep with their clients. Should sexual relationships between professors and students in the same department also be off limits?

Neil McArthur thinks not. As Times Higher Education has reported, the associate professor of philosophy at the University of Manitoba, in Canada, recently published a paper criticising the spread of bans on such relationships. But we believe that his argument is flawed.

The article is here.

Wednesday, May 31, 2017

4 questions for Paul Bloom

By Lea Winerman
May 2017, Vol 48, No. 5
Print version: page 27

Here is an excerpt:

Why do you believe this kind of empathy is overrated?

I should be clear that I'm not against empathy in general. I think it's a great source of pleasure, for instance, and it plays some role in intimate relationships. But when it comes to moral judgments, empathy makes a very poor guide.

One reason is that it's biased. You naturally empathize with people who in some way are part of your circle, who look like you, who maybe share your ethnicity. So, for example, if you base your charitable giving choices on empathy, you find yourself inevitably giving to people who [are like you], and ignoring the plight of thousands, maybe millions of others.

Another problem is that empathy is innumerate. It's a spotlight—you zoom in on one person, as opposed to many. Some people think that this is one of its advantages. But real-world moral decisions involve coping with numbers. They often involve a recognition, for instance, that helping just one person can make lives worse for hundreds or thousands of others. The innumeracy of empathy often leads to paradoxical situations where we're desperate to help a single person—or even a cute puppy—while ignoring crises like climate change, because although millions of people will be affected by it, there's no identifiable victim to zoom in on.

A third problem is that empathy can be weaponized. So, unscrupulous politicians use our empathy for victims of certain crimes to motivate anger and hatred toward other, marginalized, groups. We saw a lot of that in the last election season.

The article is here.

More CEOs Are Getting Fired After an Ethical Lapse, Study Finds

Vanessa Fuhrmans
The Wall Street Journal
Originally posted May 14, 2017

Ethical breaches are causing more chief executives to lose their jobs. The upside? Researchers say the rising numbers don’t point to more corporate misbehavior: It’s that CEOs are being held to a higher level of accountability.

Among the myriad reasons corporate bosses leave their jobs, firings have been on the decline. In a study of CEO exits at the world’s 2,500 largest public companies, researchers at PricewaterhouseCoopers LLP’s strategy consulting arm, called Strategy&, found 20% of CEO exits in the past five years were forced, down from 31% of CEO exits in the previous five years.

But CEO ousters due to ethical lapses—either their own improper conduct, or their employees’—are climbing. Such forced exits rose to 5.3% of CEO departures in the 2012-to-2016 period, up from 3.9% during the previous five years.

The article is here.

Tuesday, May 30, 2017

There’s a Right Way and a Wrong Way to Do Empathy

By Sarah Watts
The Science of Us
Originally published May 18, 2017

Here is an excerpt:

When we talk about empathy, we tend to talk about it as an unqualified good thing. Research has shown that empathy is associated with kindness and helping behaviors, while its absence, clinically referred to as psychopathy, is associated with manipulation and criminal deviance. Empathy, some scientists have concluded, allows us to function well with others and survive as a species.

But what people often don’t talk about is how even a good thing like empathy can still be emotionally draining. Empathic people who easily take on other people’s feelings can spend their days feeling overwhelmed, hurt, and heavyhearted. Empathy, in other words, can be downright stressful. So would it be fair to say that sometimes it’s unhealthy?

A paper published earlier this month in the Journal of Experimental Psychology set out to answer exactly that. According to the authors, there are “two routes” to empathy. The first is imagining how someone else might feel in a given circumstance, called “imagine-other-perspective-taking,” or IOPT. The second is actually imagining yourself in the other person’s situation, called “imagine-self-perspective-taking,” or ISPT. With IOPT, you acknowledge another person’s feelings; with ISPT, you take on that person’s feelings as your own.

The article is here.

Game Theory and Morality

Moshe Hoffman, Erez Yoeli, and Carlos David Navarrete
The Evolution of Morality
Part of the series Evolutionary Psychology pp 289-316

Here is an excerpt:

The key result for evolutionary dynamic models is that, except under extreme conditions, behavior converges to Nash equilibria. This result rests on one simple, noncontroversial assumption shared by all evolutionary dynamics: Behaviors that are relatively successful will increase in frequency. Based on this logic, game theory models have been fruitfully applied in biological contexts to explain phenomena such as animal sex ratios (Fisher, 1958), territoriality (Smith & Price, 1973), cooperation (Trivers, 1971), sexual displays (Zahavi, 1975), and parent–offspring conflict (Trivers, 1974). More recently, evolutionary dynamic models have been applied in human contexts where conscious deliberation is believed to not play an important role, such as in the adoption of religious rituals (Sosis & Alcorta, 2003), in the expression and experience of emotion (Frank, 1988; Winter, 2014), and in the use of indirect speech (Pinker, Nowak, & Lee, 2008).

Crucially for this chapter, because our behaviors are mediated by moral intuitions and ideologies, if our moral behaviors converge to Nash, so must the intuitions and ideologies that motivate them. The resulting intuitions and ideologies will bear the signature of their game theoretic origins, and this signature will lend clarity on the puzzling, counterintuitive, and otherwise hard-to-explain features of our moral intuitions, as exemplified by our motivating examples.

In order for game theory to be relevant to understanding our moral intuitions and ideologies, we need only the following simple assumption: Moral intuitions and ideologies that lead to higher payoffs become more frequent. This assumption can be met if moral intuitions that yield higher payoffs are held more tenaciously, are more likely to be imitated, or are genetically encoded. For example, if every time you transgress by commission you are punished, but every time you transgress by omission you are not, you will start to intuit that commission is worse than omission.
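The convergence claim above can be sketched with replicator dynamics, the simplest evolutionary model the authors' assumption describes: strategies earning above-average payoffs grow in frequency. The game and parameter values below are illustrative choices (the classic Hawk–Dove game, not an example from the chapter); its mixed Nash equilibrium predicts a hawk fraction of V/C, and the dynamics converge to exactly that.

```python
# Replicator dynamics for the Hawk-Dove game (illustrative, not from the
# chapter): behaviors with above-average payoffs increase in frequency,
# and the population converges to the mixed Nash equilibrium x* = V / C.

V, C = 2.0, 4.0  # value of the contested resource, cost of fighting

# Payoffs to the row strategy against the column strategy.
payoff = {
    ("H", "H"): (V - C) / 2,  # hawks split the value but pay fight costs
    ("H", "D"): V,            # hawk takes everything from a dove
    ("D", "H"): 0.0,          # dove retreats, gets nothing
    ("D", "D"): V / 2,        # doves share peacefully
}

def replicator_step(x, dt=0.01):
    """One Euler step of dx/dt = x * (fitness_H - average fitness)."""
    f_h = x * payoff[("H", "H")] + (1 - x) * payoff[("H", "D")]
    f_d = x * payoff[("D", "H")] + (1 - x) * payoff[("D", "D")]
    avg = x * f_h + (1 - x) * f_d
    return x + x * (f_h - avg) * dt

x = 0.9  # start with 90% hawks
for _ in range(20_000):
    x = replicator_step(x)

print(f"hawk frequency: {x:.3f} (Nash prediction: {V / C:.3f})")
```

Whatever the starting mix, the hawk frequency settles at V/C = 0.5, the Nash equilibrium: this is the sense in which the behaviors (and, on the chapter's argument, the intuitions motivating them) "converge to Nash."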

The book chapter is here.

Monday, May 29, 2017

Moral Hindsight

Nadine Fleischhut, Björn Meder, & Gerd Gigerenzer
Experimental Psychology (2017), 64, pp. 110-123.


How are judgments in moral dilemmas affected by uncertainty, as opposed to certainty? We tested the predictions of a consequentialist and deontological account using a hindsight paradigm. The key result is a hindsight effect in moral judgment. Participants in foresight, for whom the occurrence of negative side effects was uncertain, judged actions to be morally more permissible than participants in hindsight, who knew that negative side effects occurred. Conversely, when hindsight participants knew that no negative side effects occurred, they judged actions to be more permissible than participants in foresight. The second finding was a classical hindsight effect in probability estimates and a systematic relation between moral judgments and probability estimates. Importantly, while the hindsight effect in probability estimates was always present, a corresponding hindsight effect in moral judgments was only observed among “consequentialist” participants who indicated a cost-benefit trade-off as most important for their moral evaluation.

The article is here.

Sunday, May 28, 2017

CRISPR Makes it Clear: US Needs a Biology Strategy, FAST

Amy Webb
Originally published

Here is an excerpt:

Crispr can be used to engineer agricultural products like wheat, rice, and animals to withstand the effects of climate change. Seeds can be engineered to produce far greater yields in tiny spaces, while animals can be edited to create triple their usual muscle mass. This could dramatically change global agricultural trade and cause widespread geopolitical destabilization. Or, with advance planning, this technology could help the US forge new alliances.

How comfortable do you feel knowing that there is no group coordinating a national biology strategy in the US, and that a single for-profit company holds a critical mass of intellectual property rights to the future of genomic editing?

While I admire Zhang’s undeniable smarts and creativity, for-profit companies don’t have a mandate to balance the tension between commercial interests and what’s good for humanity; there is no mechanism to ensure that they’ll put our longer-term best interests first.

The article is here.

Saturday, May 27, 2017

Why Do So Many Incompetent Men Become Leaders?

Tomas Chamorro-Premuzic
Harvard Business Review
Originally published August 22, 2013

There are three popular explanations for the clear under-representation of women in management, namely: (1) they are not capable; (2) they are not interested; (3) they are both interested and capable but unable to break the glass-ceiling: an invisible career barrier, based on prejudiced stereotypes, that prevents women from accessing the ranks of power. Conservatives and chauvinists tend to endorse the first; liberals and feminists prefer the third; and those somewhere in the middle are usually drawn to the second. But what if they all missed the big picture?

In my view, the main reason for the uneven management sex ratio is our inability to discern between confidence and competence. That is, because we (people in general) commonly misinterpret displays of confidence as a sign of competence, we are fooled into believing that men are better leaders than women. In other words, when it comes to leadership, the only advantage that men have over women (e.g., from Argentina to Norway and the USA to Japan) is the fact that manifestations of hubris — often masked as charisma or charm — are commonly mistaken for leadership potential, and that these occur much more frequently in men than in women.

The article is here.

Friday, May 26, 2017

What is moral injury in veterans?

Holly Arrow and William Schumacher
The Conversation
Originally posted May 21, 2017

Here is an excerpt:

The moral conflict created by the violations of “what’s right” generates moral injury when the inability to reconcile wartime actions with a personal moral code creates lasting psychological consequences.

Psychiatrist Jonathan Shay, in his work with Vietnam veterans, defined moral injury as the psychological, social and physiological results of a betrayal of “what’s right” by an authority in a high-stakes situation. In “Achilles In Vietnam,” a book that examines the psychological devastation of war, a Vietnam veteran described a situation in which his commanding officers used tear gas on a village after the veteran and his unit had their gas masks rendered ineffective due to water damage. The veteran stated, “They gassed us almost to death.” This type of “friendly fire” incident is morally wounding in a way that attacks by an enemy are not.

Psychologist Brett Litz and his colleagues expanded this to include self-betrayal and identified “perpetrating, failing to prevent, bearing witness to, or learning about acts that transgress deeply held moral beliefs and expectations” as the cause of moral injury.

Guilt and moral injury

A research study published in 1991 identified combat-related guilt as the best predictor of suicide attempts among a sample of Vietnam veterans with PTSD. Details of the veterans’ experiences connected that guilt to morally injurious events.

The article is here.

Do the Right Thing: Preferences for Moral Behavior, Rather than Equity or Efficiency Per Se, Drive Human Prosociality

Capraro, Valerio and Rand, David G.
(May 8, 2017).


Decades of experimental research have shown that some people forgo personal gains to benefit others in unilateral one-shot anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are fundamentally incompatible with these standard social preference models. We introduce the “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labeling of the options to describe the equitable versus efficient option as morally right completely reverses people’s behavior in the TOG. Moreover, people who take the positively framed action, be it equitable or efficient, are more prosocial in a separate Dictator Game (DG) and Prisoner’s Dilemma (PD). Rather than preferences for equity and/or efficiency per se, we propose a generalized morality preference that motivates people to do what they think is morally right. When one option is clearly selfish and the other pro-social (e.g. equitable and/or efficient), as in the DG and PD, the economic outcomes are enough to determine what is morally right. When one option is not clearly more prosocial than the other, as in the TOG, framing resolves the ambiguity about which choice is moral. In addition to explaining our data, this account organizes prior findings that framing impacts cooperation in the standard simultaneous PD, but not in the asynchronous PD or the DG. Thus we present a new framework for understanding the basis of human prosociality.

The paper is here.

Thursday, May 25, 2017

In a moral dilemma, choose the one you love: Impartial actors are seen as less moral than partial ones

Jamie S. Hughes
British Journal of Social Psychology


Although impartiality and concern for the greater good are lauded by utilitarian philosophies, it was predicted that when values conflict, those who acted impartially rather than partially would be viewed as less moral. Across four studies, using life-or-death scenarios and more mundane ones, support for the idea that relationship obligations are important in moral attribution was found. In Studies 1–3, participants rated an impartial actor as less morally good and his or her action as less moral compared to a partial actor. Experimental and correlational evidence showed the effect was driven by inferences about an actor's capacity for empathy and compassion. In Study 4, the relationship obligation hypothesis was refined. The data suggested that violations of relationship obligations are perceived as moral as long as strong alternative justifications sanction them. Discussion centres on the importance of relationships in understanding moral attributions.

The article is here.

Emerging technologies: Ethics and morality

Elfren Cruz
The Philippine Star Global
Originally published May 7, 2017

Here is an excerpt:

These emerging technologies will decide the future of humanity because they can be used by the elite class or populists for good or evil. There is no doubt that there will be immense benefits from these new forms of technology. The main issue has been termed as “distributive justice” by some thinkers. This refers to the determination of access to the benefits of technological change.

There are those who believe that the benefits of emerging technologies will worsen the plight of the poor. The World Bank and the International Labor Organization have already warned that millions of jobs will be wiped out by new technologies. As new labor-saving devices are invented, the power of capitalists will grow and the power of labor will diminish. The number of billionaires will increase while the gap between the rich and the poor will continue to widen. Stephen Hawking, the world’s most famous scientist, has even said that artificial intelligence could lead to the extinction of humanity.

By contrast, the optimists believe that emerging technologies, if properly used, could eliminate poverty and abolish suffering. Stuart Russell of UC Berkeley said: “Everything we have of value as human beings, as civilization is the result of intelligence and what artificial intelligence (AI) could do is essentially be a power tool that magnifies human intelligence and gives us the ability to move our civilization forward in all kinds of ways. It might be curing disease, it might be eliminating poverty. I think it certainly should be preventing environmental catastrophe. AI could be instrumental to all those things.”

The article is here.

Wednesday, May 24, 2017

Roger Penrose On Why Consciousness Does Not Compute

Steve Paulson
Originally posted May 4, 2017

Here is an excerpt:

As we probed the deeper implications of Penrose’s theory about consciousness, it wasn’t always clear where to draw the line between the scientific and philosophical dimensions of his thinking. Consider, for example, superposition in quantum theory. How could Schrödinger’s cat be both dead and alive before we open the box? “An element of proto-consciousness takes place whenever a decision is made in the universe,” he said. “I’m not talking about the brain. I’m talking about an object which is put into a superposition of two places. Say it’s a speck of dust that you put into two locations at once. Now, in a small fraction of a second, it will become one or the other. Which does it become? Well, that’s a choice. Is it a choice made by the universe? Does the speck of dust make this choice? Maybe it’s a free choice. I have no idea.”

I wondered if Penrose’s theory has any bearing on the long-running philosophical argument between free will and determinism. Many neuroscientists believe decisions are caused by neural processes that aren’t ruled by conscious thought, rendering the whole idea of free will obsolete. But the indeterminacy that’s intrinsic to quantum theory would suggest that causal connections break down in the conscious brain. Is Penrose making the case for free will?

“Not quite, though at this stage, it looks like it,” he said. “It does look like these choices would be random. But free will, is that random?” Like much of his thinking, there’s a “yes, but” here. His claims are provocative, but they’re often provisional. And so it is with his ideas about free will. “I’ve certainly grown up thinking the universe is deterministic. Then I evolved into saying, ‘Well, maybe it’s deterministic but it’s not computable.’ But is it something more subtle than that? Is it several layers deeper? If it’s something we use for our conscious understanding, it’s going to be a lot deeper than even straightforward, non-computable deterministic physics. It’s a kind of delicate borderline between completely deterministic behavior and something which is completely free.”

Ethics office rejects White House attempt to halt inquiry into lobbyists

Associated Press
Originally posted May 23, 2017

Donald Trump’s administration says the government ethics office lacks the authority to force the president to reveal how many waivers he’s granted to ex-lobbyists in his new administration.

Trump’s budget director, Mick Mulvaney, is asking that the office of government ethics (OGE) director, Walter Shaub, halt his inquiry into lobbyists-turned-Trump administration employees. Mulvaney wrote in a letter last week to Shaub: “This data call appears to raise legal questions regarding the scope of OGE’s authorities.”

Shaub fired back Monday that OGE’s request was well within bounds. The ethics director says he expects to see the waiver information within 10 days.

The article is here.

Tuesday, May 23, 2017

Trump moves to block ethics inquiry centered on ex-lobbyists

Brandon Carter
The Hill
Originally published May 22, 2017

The White House is looking to block an effort from the government’s top ethics office to disclose the names of former lobbyists who have been granted waivers to work in the federal government, according to a new report.

The New York Times reports that the White House sent a letter to the head of the Office of Government Ethics (OGE) challenging its legal authority to request that information.

“It is an extraordinary thing,” Walter Shaub Jr., the director of the ethics office, told the Times. “I have never seen anything like it.”

The letter sent by Mick Mulvaney, the head of the Office of Management and Budget, questions whether the ethics office has the authority to demand information regarding ex-lobbyists who are currently working in the federal government.

The article is here.

Psychologist contractors say they were following agency orders

Pamela MacLean
Bloomberg News
Originally posted May 5, 2017

A pair of U.S. psychologists accused of overseeing the torture of terrorism detainees more than a decade ago face reluctance from a federal judge to let them question the CIA’s deputy director to show they were only following orders.

The judge indicated at a hearing Friday that the psychologists should be able to defend themselves in the 2015 lawsuit without compromising government secrecy around the exact role Gina Haspel played in the agency’s overseas interrogation program years before she was tapped to be second in command by the Trump administration.

The American Civil Liberties Union, which filed the case on behalf of three ex-prisoners, one of whom died in custody, is urging the judge not to let the psychologists’ lawyers question Haspel and a retired Central Intelligence Agency official. While the defendants want to demonstrate their actions were approved by the agency, the ACLU says that won’t shield them from liability.

The article is here.

Monday, May 22, 2017

Half of US physicians receive industry payments

Michael McCarthy
BMJ 2017; 357

Nearly half of US physicians receive payments from the drug, medical device, and related medical industries, and surgeons and male physicians are more likely to do so, a US study has found.

The study leader, Jona A Hattangadi-Gluth, of the University of California, San Diego, based in La Jolla, said that most payments were relatively small but that many specialists receive more than $10 000 (£7750; €9160) a year from industry, including 11% of orthopedic surgeons, 12% of neurologists, and 13% of neurosurgeons.

She said, “The data suggest that these payments are much more pervasive than we thought and [that] there is much more money going directly to physicians than maybe people recognized.”

The researchers analyzed data from 2015 collected from Open Payments, a program created by the 2010 Affordable Care Act that requires biomedical manufacturers and group purchasing organizations to report all general payments, ownership interests, and research payments paid to allopathic and osteopathic physicians in the US.

The article is here.

The morality of technology

Rahul Matthan
Live Mint
Originally published May 3, 2017

Here is an excerpt:

Another example of the two sides of technology is drones—a modern technology that is already being deployed widely—from the delivery of groceries to ensuring that life saving equipment reaches first responders in high density urban areas. But for every beneficent use of drone tech, there are an equal number of dubious uses that challenge our ethical boundaries. Foremost among these is development of AI-powered killer drones—autonomous flying weapons intelligent enough to accurately distinguish between friend and foe and then, autonomously, take the decision to execute a kill.

This duality is inherent in all of tech. But just because technology can be used for evil, that should not, of itself, be a reason not to use it. We need new technology to better ourselves and the world we live in—and we need to be wise about how we apply it so that our use remains consistent with the basic morality inherent in modern society. This implies that each time we make a technological breakthrough we must assess afresh the contexts within which it could present itself and the uses to which it should (and should not) be put. If required, we must take the trouble to re-draw our moral boundaries, establishing the limits within which it must be constrained.

The article is here.

Sunday, May 21, 2017

What do we evaluate when we evaluate moral character?

Erik G. Helzer & Clayton R. Critcher


Despite growing interest in the topic of moral character, there is very little precision and a lack of agreement among researchers as to what is evaluated when people evaluate character. In this chapter we define moral character in novel social cognitive terms and offer empirical support for the idea that the central qualities of moral character are those deemed essential for social relationships.

Here is an excerpt:

We approach this chapter from the theoretical standpoint that the centrality of character evaluation is due to its function in social life. Evaluation of character is, we think, inherently a judgment about a person’s qualifications for being a solid long-term social investment. That is, people attempt to suss out moral character because they want to know whether a particular agent is the type of person who likely possesses the necessary (even if not sufficient) qualities they expect in a social relationship. In developing these ideas theoretically and empirically, we consider what form moral character takes, discuss what this proposal suggests about how people may and do assess others’ moral character, and identify an assortment of qualities that our perspective predicts will be central to moral character.

The book chapter is here.