Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, October 31, 2019

Scientists 'may have crossed ethical line' in growing human brains

[Image: cross-section of a cerebral organoid]
Ian Sample
The Guardian
Originally posted October 20, 2019

Neuroscientists may have crossed an “ethical Rubicon” by growing lumps of human brain in the lab, and in some cases transplanting the tissue into animals, researchers warn.

The creation of mini-brains or brain “organoids” has become one of the hottest fields in modern neuroscience. The blobs of tissue are made from stem cells and, while they are only the size of a pea, some have developed spontaneous brain waves, similar to those seen in premature babies.

Many scientists believe that organoids have the potential to transform medicine by allowing them to probe the living brain like never before. But the work is controversial because it is unclear where it may cross the line into human experimentation.

On Monday, researchers will tell the world’s largest annual meeting of neuroscientists that some scientists working on organoids are “perilously close” to crossing the ethical line, while others may already have done so by creating sentient lumps of brain in the lab.

“If there’s even a possibility of the organoid being sentient, we could be crossing that line,” said Elan Ohayon, the director of the Green Neuroscience Laboratory in San Diego, California. “We don’t want people doing research where there is potential for something to suffer.”

The info is here.

Bridging cognition and emotion in moral decision making: Role of emotion regulation

Raluca D. Szekely and Andrei C. Miu
In M. L. Bryant (Ed.), Handbook on Emotion Regulation: Processes, Cognitive Effects and Social Consequences. New York: Nova Science.

Abstract

In recent decades, the involvement of emotions in moral decision making has been investigated using moral dilemmas in healthy volunteers and in neuropsychological and psychiatric patients. Recent research characterized emotional experience in moral dilemmas and its association with deontological decisions. Moreover, theories have debated the roles of emotion and reasoning in moral decision making and suggested that emotion regulation may be crucial in overriding emotion-driven deontological biases. After briefly introducing the reader to moral dilemma research and to current perspectives on emotion and emotion-cognition interactions in this area, the present chapter reviews emerging evidence for emotion regulation in moral decision making. Inspired by recent advances in the field of emotion regulation, this chapter also highlights several avenues for future research on emotion regulation in moral psychology.

The book chapter can be downloaded here.

This is a good summary for those starting to learn about cognition, decision-making models, emotions, and morality.

Wednesday, October 30, 2019

In U.S., Decline of Christianity Continues at Rapid Pace

Pew Research Center
Originally published October 17, 2019

The religious landscape of the United States continues to change at a rapid clip. In Pew Research Center telephone surveys conducted in 2018 and 2019, 65% of American adults describe themselves as Christians when asked about their religion, down 12 percentage points over the past decade. Meanwhile, the religiously unaffiliated share of the population, consisting of people who describe their religious identity as atheist, agnostic or “nothing in particular,” now stands at 26%, up from 17% in 2009.

Both Protestantism and Catholicism are experiencing losses of population share. Currently, 43% of U.S. adults identify with Protestantism, down from 51% in 2009. And one-in-five adults (20%) are Catholic, down from 23% in 2009. Meanwhile, all subsets of the religiously unaffiliated population – a group also known as religious “nones” – have seen their numbers swell. Self-described atheists now account for 4% of U.S. adults, up modestly but significantly from 2% in 2009; agnostics make up 5% of U.S. adults, up from 3% a decade ago; and 17% of Americans now describe their religion as “nothing in particular,” up from 12% in 2009. Members of non-Christian religions also have grown modestly as a share of the adult population.

[Chart: Most white adults now say they attend religious services a few times a year or less]

The info is here.

Punish or Protect? How Close Relationships Shape Responses to Moral Violations

Weidman, A. C., Sowden, W. J., Berg, M. K.,
& Kross, E. (2019).
Personality and Social Psychology Bulletin.
https://doi.org/10.1177/0146167219873485

Abstract

People have fundamental tendencies to punish immoral actors and treat close others altruistically. What happens when these tendencies collide—do people punish or protect close others who behave immorally? Across 10 studies (N = 2,847), we show that people consistently anticipate protecting close others who commit moral infractions, particularly highly severe acts of theft and sexual harassment. This tendency emerged regardless of gender, political orientation, moral foundations, and disgust sensitivity and was driven by concerns about self-interest, loyalty, and harm. We further find that people justify this tendency by planning to discipline close others on their own. We also identify a psychological mechanism that mitigates the tendency to protect close others who have committed severe (but not mild) moral infractions: self-distancing. These findings highlight the role that relational closeness plays in shaping people’s responses to moral violations, underscoring the need to consider relational closeness in future moral psychology work.

From the General Discussion

These findings also clarify the mechanisms through which people reconcile behaving loyally (by protecting close others who commit moral infractions) at the cost of behaving dishonestly while allowing an immoral actor to evade formal punishment (by lying to a police officer). It does not appear that people view close others’ moral infractions as less immoral: A brother’s heinous crime is still a heinous crime.  Instead, when people observe close others behaving immorally, we found through an exploratory linguistic coding analysis that they overwhelmingly intend to enact a lenient form of punishment by confronting the perpetrator to discuss the act. We suspect that doing so allows a person to simultaneously (a) maintain their self-image as a morally upstanding individual and (b) preserve and even enhance the close relationship, in line with the finding in Studies 1d and 1e that protecting close others from legal fallout is viewed as an act of self-interest. These tactics are also broadly consistent with prior work suggesting that people often justify their own immoral acts by focusing on positive consequences of the act or reaffirming their own moral standing (Bandura, 2016). In contrast, we found that when people observe distant others behaving immorally, they report greater intentions to subject these individuals to external, formal means of punishment, such as turning them in to law enforcement or subjecting them to social ostracization.

Tuesday, October 29, 2019

Should we create artificial moral agents? A Critical Analysis

John Danaher
Philosophical Disquisitions
Originally published September 21, 2019

Here is an excerpt:

So what argument is being made? At first, it might look like Sharkey is arguing that moral agency depends on biology, but I think that is a bit of a red herring. What she is arguing is that moral agency depends on emotions (particularly second personal emotions such as empathy, sympathy, shame, regret, anger, resentment etc). She then adds to this the assumption that you cannot have emotions without having a biological substrate. This suggests that Sharkey is making something like the following argument:

(1) You cannot have explicit moral agency without having second personal emotions.

(2) You cannot have second personal emotions without being constituted by a living biological substrate.

(3) Robots cannot be constituted by a living biological substrate.

(4) Therefore, robots cannot have explicit moral agency.

Assuming this is a fair reconstruction of the reasoning, I have some questions about it. First, taking premises (2) and (3) as a pair, I would query whether having a biological substrate really is essential for having second personal emotions. What is the necessary connection between biology and emotionality? This smacks of biological mysterianism or dualism to me, almost a throwback to the time when biologists thought that living creatures possessed some élan vital that separated them from the inanimate world. Modern biology and biochemistry cast all that into doubt. Living creatures are — admittedly extremely complicated — evolved biochemical machines. There is no essential and unbridgeable chasm between the living and the inanimate.

The info is here.

Elon Musk's AI Project to Replicate the Human Brain Receives $1B from Microsoft

Anthony Cuthbertson
The Independent
Originally posted July 23, 2019

Microsoft has invested $1 billion in the Elon Musk-founded artificial intelligence venture that plans to mimic the human brain using computers.

OpenAI said the investment would go towards its efforts of building artificial general intelligence (AGI) that can rival and surpass the cognitive capabilities of humans.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said OpenAI CEO Sam Altman.

“Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”

The two firms will jointly build AI supercomputing technologies, which OpenAI plans to commercialise through Microsoft and its Azure cloud computing business.

The info is here.

Monday, October 28, 2019

The Ethics of Contentious Hard Forks in Blockchain Networks With Fixed Features

Tae Wan Kim and Ariel Zetlin-Jones
Front. Blockchain, 28 August 2019
https://doi.org/10.3389/fbloc.2019.00009

An advantage of blockchain protocols is that a decentralized community of users may each update and maintain a public ledger without the need for a trusted third party. Modifications to the protocol itself, however, introduce important economic and ethical considerations that we believe have not been adequately considered by the community of blockchain developers. We clarify the problem and provide one implementable ethical framework that such developers could use to determine which aspects should be immutable and which should not.

(cut)

3. A Normative Framework for Blockchain Design With Fixed Features

Which features of a blockchain protocol should or should not be alterable? To answer this question, we need a normative framework. Our framework is twofold: the substantive and the procedural. The substantive consists of two ethical principles: the generalization principle and the utility-enhancement principle. The procedural has three principles: publicity, revision and appeals, and regulation. All the principles are necessary conditions. The procedural principles help to collectively examine whether any application of the two substantive principles is reasonable. The set of the five principles as a whole is in line with the broadly Kantian deontological approach to justice and democracy (Kant, 1785). In particular, we are partly indebted to Daniels and Sabin's (2002) procedural approach to fair allocations of limited resources. Yet our framework differs from theirs in several ways: the particular context we deal with is different, we replace the controversial “relevance” condition with our own representation of the Kantian generalization principle, and we add the utility-enhancement principle. Although we do not offer a fully fledged normative analysis of the given issue, we propose a possible normative framework for cryptocurrency communities.

Dimensions of decision-making: An evidence-based classification of heuristics and biases

A. Ceschi and others
Personality and Individual Differences, 
Volume 146, 1 August 2019, Pages 188-200

Abstract

Traditionally, studies examining decision-making heuristics and biases (H&B) have focused on aggregate effects using between-subjects designs in order to demonstrate violations of rationality. Although H&B are often studied in isolation from one another, emerging research has suggested that stable and reliable individual differences in rational thought exist and that similarities in performance across tasks are related, which may suggest an underlying phenotypic structure of decision-making skills. Though numerous theoretical and empirical classifications have been offered, results have been mixed. The current study aimed to clarify this question. Participants (N = 289) completed a battery of 17 H&B tasks, assessed with a within-subjects design, that we selected based on a review of prior empirical and theoretical taxonomies. Exploratory and confirmatory analyses yielded a solution suggesting that these biases conform to a model composed of three dimensions: Mindware gaps, Valuation biases (i.e., Positive Illusions and Negativity effect), and Anchoring and Adjustment. We discuss these findings in relation to proposed taxonomies and existing studies on individual differences in decision-making.

A pdf of the research can be downloaded here.

Sunday, October 27, 2019

Language Is the Scaffold of the Mind

Anna Ivanova
nautil.us
Originally posted September 26, 2019

Can you imagine a mind without language? More specifically, can you imagine your mind without language? Can you think, plan, or relate to other people if you lack words to help structure your experiences?

Many great thinkers have drawn a strong connection between language and the mind. Oscar Wilde called language “the parent, and not the child, of thought”; Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world”; and Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.”

After all, language is what makes us human, what lies at the root of our awareness, our intellect, our sense of self. Without it, we cannot plan, cannot communicate, cannot think. Or can we?

Imagine growing up without words. You live in a typical industrialized household, but you are somehow unable to learn the language of your parents. That means that you do not have access to education; you cannot properly communicate with your family other than through a set of idiosyncratic gestures; you never get properly exposed to abstract ideas such as “justice” or “global warming.” All you know comes from direct experience with the world.

It might seem that this scenario is purely hypothetical. There aren’t any cases of language deprivation in modern industrialized societies, right? It turns out there are. Many deaf children born into hearing families face exactly this issue. They cannot hear and, as a result, do not have access to their linguistic environment. Unless the parents learn sign language, the child’s language access will be delayed and, in some cases, missing completely.

The info is here.


Saturday, October 26, 2019

Treatments for the Prevention and Management of Suicide: A Systematic Review.

D'Anci KE, Uhl S, Giradi G, et al.
Ann Intern Med. 
doi: 10.7326/M19-0869

Abstract

Background:
Suicide is a growing public health problem, with the national rate in the United States increasing by 30% from 2000 to 2016.

Purpose:
To assess the benefits and harms of nonpharmacologic and pharmacologic interventions to prevent suicide and reduce suicide behaviors in at-risk adults.

Conclusion:
Both cognitive behavioral therapy (CBT) and dialectical behavior therapy (DBT) showed modest benefit in reducing suicidal ideation compared with treatment as usual (TAU) or wait-list control, and CBT also reduced suicide attempts compared with TAU. Ketamine and lithium reduced the rate of suicide compared with placebo, but there was limited information on harms. Limited data are available to support the efficacy of other nonpharmacologic or pharmacologic interventions.

Discussion

In this systematic review (SR), we reviewed and synthesized evidence from 8 SRs and 15 randomized controlled trials (RCTs) of nonpharmacologic and pharmacologic interventions intended to prevent suicide in at-risk persons. These interventions are a subset of topics included in the updated VA/DoD 2019 clinical practice guideline (CPG) for assessment and management of patients at risk for suicide. The full final guideline is available from the VA Web site (www.healthquality.va.gov).

Nonpharmacologic interventions encompassed a range of approaches delivered either face-to-face or via the Internet or other technology. We found moderate-strength evidence supporting the use of face-to-face or Internet-delivered CBT in reducing suicide attempts, suicidal ideation, and hopelessness compared with TAU. We found low-strength evidence suggesting that CBT was not effective in reducing suicides. However, rates of suicide were generally low in the included studies, which limits our ability to draw firm conclusions about this outcome. Data from small studies provide low-strength evidence supporting the use of DBT over client-oriented therapy or control for reducing suicidal ideation. For other outcomes and other comparisons, we found no benefit of DBT. There was low-strength evidence supporting use of WHO-BIC to reduce suicide, CRP to reduce suicide attempts, and Window to Hope to reduce suicidal ideation and hopelessness.

Friday, October 25, 2019

Beyond Crypto — Blockchain Ethics

Jessie Smith
hackernoon.com
Originally posted February 4, 2019

Here is an excerpt:

At its roots, blockchain is an entirely decentralized, non-governed transactional system. It is run through many nodes that all together, result in a blockchain network. Each network contains a ledger. This ledger acts as the source of truth; it stores all of the transactions that have ever happened on the network. Similar to how a bank will store a user’s withdrawal and deposit transactions, a blockchain ledger will store every transaction that has occurred on a network. The ledger is publicly available to all of the nodes in the network.

Bitcoin miners can run their own nodes (computer hardware) in hopes of obtaining a bitcoin through the combination of processing power and a little bit of luck. The difference between a bank’s ledger and a blockchain ledger is that a bank can make changes to their ledger at any point in time, since they hold all of the power. A blockchain ledger on the other hand doesn’t belong to any central entity. It is accessible and owned by every node in the network, and is entirely immutable.

Without a central governing entity over a network, every transaction needs to be verified by a majority of the nodes. Transactions can include transferring cryptocurrency between two people, reversing old transactions, spending coins, and even blocking miners from using their own nodes. For example, if someone wanted to transfer their bitcoins to someone else, they would need their transaction to be verified by at least half of all the nodes in a network.

The info is here.
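The excerpt's two core ideas — a ledger every node holds a copy of, and a transaction taking effect only when a majority of nodes approves it — can be illustrated with a toy sketch. This is my own minimal illustration, not code from any real blockchain; the hash-chaining of blocks is an assumption that goes beyond the excerpt (which only says the ledger is shared and immutable), and the validation rule is a deliberately trivial stand-in for real checks such as signatures and balances.

```python
# Toy sketch of a shared, hash-chained ledger with majority verification.
# NOT a real blockchain: validation is a placeholder, and there is no
# proof-of-work, networking, or cryptographic signing.
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (including the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Node:
    """One participant; each node keeps its own full copy of the ledger."""
    def __init__(self):
        self.ledger = [{"prev": None, "tx": "genesis"}]

    def validate(self, tx):
        # Stand-in for real validation (signatures, balances, etc.).
        return isinstance(tx, dict) and "from" in tx and "to" in tx

    def append(self, tx):
        # Each new block commits to the previous one via its hash, which is
        # what makes retroactive edits detectable.
        prev = block_hash(self.ledger[-1])
        self.ledger.append({"prev": prev, "tx": tx})

def submit(nodes, tx):
    """Record tx on every copy of the ledger only if a majority approves."""
    votes = sum(node.validate(tx) for node in nodes)
    if votes > len(nodes) / 2:
        for node in nodes:
            node.append(tx)
        return True
    return False

nodes = [Node() for _ in range(5)]
print(submit(nodes, {"from": "alice", "to": "bob", "amount": 1}))  # True
print(submit(nodes, "malformed"))  # False: rejected by every node
```

The sketch shows why no single party can rewrite history: a change to one node's copy would break its hash chain and disagree with the majority's copies.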

Deciding Versus Reacting: Conceptions of Moral Judgment and the Reason-Affect Debate

Monin, B., Pizarro, D. A., & Beer, J. S. (2007).
Review of General Psychology, 11(2), 99–111.
https://doi.org/10.1037/1089-2680.11.2.99

Abstract

Recent approaches to moral judgment have typically pitted emotion against reason. In an effort to move beyond this debate, we propose that authors presenting diverging models are considering quite different prototypical situations: those focusing on the resolution of complex dilemmas conclude that morality involves sophisticated reasoning, whereas those studying reactions to shocking moral violations find that morality involves quick, affect-laden processes. We articulate these diverging dominant approaches and consider three directions for future research (moral temptation, moral self-image, and lay understandings of morality) that we propose have not received sufficient attention as a result of the focus on these two prototypical situations within moral psychology.

Concluding Thoughts

Recent theorizing on the psychology of moral decision making has pitted deliberative reasoning against quick affect-laden intuitions. In this article, we propose a resolution to this tension by arguing that it results from a choice of different prototypical situations: advocates of the reasoning approach have focused on sophisticated dilemmas, whereas advocates of the intuition/emotion approach have focused on reactions to other people’s moral infractions. Arbitrarily choosing one or the other as the typical moral situation has a significant impact on one’s characterization of moral judgment.

Thursday, October 24, 2019

Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

Jon Evans
techcrunch.com
Originally published October 20, 2019

This week Mark Zuckerberg gave a speech in which he extolled “giving everyone a voice” and fighting “to uphold as wide a definition of freedom of expression as possible.” That sounds great, of course! Freedom of expression is a cornerstone, if not the cornerstone, of liberal democracy. Who could be opposed to that?

The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site.

But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline. What you read on Facebook is determined entirely by Facebook’s algorithm, which elides much — censors much, if you wrongly think the News Feed is free speech — and amplifies little.

What is amplified? Two forms of content. For native content, the algorithm optimizes for engagement. This in turn means people spend more time on Facebook, and therefore more time in the company of that other form of content which is amplified: paid advertising.

Of course this isn’t absolute. As Zuckerberg notes in his speech, Facebook works to stop things like hoaxes and medical misinformation from going viral, even if they’re otherwise anointed by the algorithm. But he has specifically decided that Facebook will not attempt to stop paid political misinformation from going viral.

The info is here.
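The contrast the article draws — a chronological feed versus one ordered by predicted engagement — can be made concrete with a toy sketch. This is my own illustration, not Facebook's actual ranking system; the posts, scores, and field names are all invented for the example.

```python
# Toy contrast between a chronological feed and an engagement-ranked feed.
# All data here is invented; real ranking systems use far richer signals.
posts = [
    {"id": "measured-take", "age_hours": 1, "predicted_engagement": 0.02},
    {"id": "outrage-bait", "age_hours": 30, "predicted_engagement": 0.91},
    {"id": "family-photo", "age_hours": 5, "predicted_engagement": 0.40},
]

# "Free speech" feed: everyone's posts shown newest-first, no amplification.
chronological = sorted(posts, key=lambda p: p["age_hours"])

# Engagement-optimized feed: whatever is predicted to hold attention wins,
# regardless of recency — the "free amplification" the article describes.
engagement_ranked = sorted(posts, key=lambda p: -p["predicted_engagement"])

print([p["id"] for p in chronological])
# ['measured-take', 'family-photo', 'outrage-bait']
print([p["id"] for p in engagement_ranked])
# ['outrage-bait', 'family-photo', 'measured-take']
```

Even in this three-post example, the day-old high-engagement item jumps the queue; at feed scale, that reordering is the amplification the article argues is distinct from speech itself.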

Editor's note: Facebook is one of the most defective products that millions of Americans use every day.

The consciousness illusion

Keith Frankish
aeon.co
Originally published September 26, 2019

Here is an excerpt:

The first concerns explanatory simplicity. If we observe something science can’t explain, then the simplest hypothesis is that it’s an illusion, especially if it can be observed only from one particular angle. This is exactly the case with phenomenal consciousness. Phenomenal properties cannot be explained in standard scientific ways and can be observed only from the first-person viewpoint (no one but me can experience my sensations). This does not show that they aren’t real. It could be that we need to radically rethink our science but, as Dennett says, the theory that they are illusory is the obvious default one.

A second argument concerns our awareness of phenomenal properties. We are aware of features of the natural world only if we have a sensory system that can detect them and generate representations of them for use by other mental systems. This applies equally to features of our own minds (which are parts of the natural world), and it would apply to phenomenal properties too, if they were real. We would need an introspective system that could detect them and produce representations of them. Without that, we would have no more awareness of our brains’ phenomenal properties than we do of their magnetic properties. In short, if we were aware of phenomenal properties, it would be by virtue of having mental representations of them. But then it would make no difference whether these representations were accurate. Illusory representations would have the same effects as veridical ones. If introspection misrepresents us as having phenomenal properties then, subjectively, that’s as good as actually having them. Since science indicates that our brains don’t have phenomenal properties, the obvious inference is that our introspective representations of them are illusory.

There is also a specific argument for preferring illusionism to property dualism. In general, if we can explain our beliefs about something without mentioning the thing itself, then we should discount the beliefs.

The info is here.

Wednesday, October 23, 2019

The Moral Choices on CRISPR Babies

Sheldon Krimsky
The American Journal of Bioethics
Published online September 26, 2019

In late November 2018, Chinese scientist Dr. He Jiankui announced at the Second International Summit on Human Genome Editing in Hong Kong that he had used CRISPR/Cas9 gene editing on two female embryos that were brought to term through an in vitro fertilization (IVF) pregnancy. The world scientific community was ill-prepared for the announcement, since the moral issues surrounding the editing of human embryos were under discussion but hardly resolved. The recommendation of the 12-member organizing committee of the 2015 International Summit on Human Gene Editing in Washington, DC, stated that it would be irresponsible to undertake any clinical use of germline editing unless and until the safety and efficacy issues were resolved and there was a broad consensus on the specific application, and that such use could proceed only under appropriate regulatory oversight (International Summit on Human Genome Editing 2015). Similar recommendations were made by the National Academies of Sciences, Engineering, and Medicine. This editorial gives two reasons for genetically modifying embryos and three reasons against it.

The summary of arguments for and against genetically modifying embryos is here.

Supreme Court Ethics Reform

Johanna Kalb and Alicia Bannon
Brennan Center for Justice
Originally published September 24, 2019

Today, the nine justices on the Supreme Court are the only U.S. judges — state or federal — not governed by a code of ethical conduct. But that may be about to change. Justice Elena Kagan recently testified during a congressional budget hearing that Chief Justice John Roberts is exploring whether to develop an ethical code for the Court. This was big news, given that the chief justice has previously rejected the need for a Supreme Court ethics code.

In fact, however, the Supreme Court regularly faces challenging ethical questions, and because of their crucial and prominent role, the justices receive intense public scrutiny for their choices. Over the last two decades, almost all members of the Supreme Court have been criticized for engaging in behaviors that are forbidden to other federal court judges, including participating in partisan convenings or fundraisers, accepting expensive gifts or travel, making partisan comments at public events or in the media, or failing to recuse themselves from cases involving apparent conflicts of interest, either financial or personal. Congress has also taken notice of the problem. The For the People Act, which was passed in March 2019 by the House of Representatives, included the latest of a series of proposals by both Republican and Democratic legislators to clarify the ethical standards that apply to the justices’ behavior.

The info is here.

Tuesday, October 22, 2019

AI used for first time in job interviews in UK to find best applicants

Charles Hymas
The Telegraph
Originally posted September 27, 2019

Artificial intelligence (AI) and facial expression technology is being used for the first time in job interviews in the UK to identify the best candidates.

Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop.

The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of facial and linguistic information compiled from previous interviews of those who have gone on to prove to be good at the job.

Hirevue, the US company which has developed the interview technology, claims it enables hiring firms to interview more candidates in the initial stage rather than simply relying on CVs and that it provides a more reliable and objective indicator of future performance free of human bias.

However, academics and campaigners warned that any AI or facial recognition technology would inevitably have in-built biases in its databases that could discriminate against some candidates and exclude talented applicants who might not conform to the norm.

The info is here.

Is Editing the Genome for Climate Change Adaptation Ethically Justifiable?

Lisa Soleymani Lehmann
AMA J Ethics. 2017;19(12):1186-1192.

Abstract

As climate change progresses, we humans might have to inhabit a world for which we are increasingly maladapted. If we were able to identify genes that directly influence our ability to thrive in a changing climate, would it be ethically justifiable to edit the human genome to enhance our ability to adapt to this new environment? Should we use gene editing not only to prevent significant disease but also to enhance our ability to function in the world? Here I suggest a “4-S framework” for analyzing the justifiability of gene editing that includes these considerations: (1) safety, (2) significance of harm to be averted, (3) succeeding generations, and (4) social consequences.

Conclusion

Gene editing has unprecedented potential to improve human health. CRISPR/Cas9 has a specificity and simplicity that opens up wide possibilities. If we are unable to prevent serious negative health consequences of climate change through environmental and public health measures, gene editing could have a role in helping human beings adapt to new environmental conditions. Any decision to proceed should apply the 4-S framework.

The info is here.

Monday, October 21, 2019

An ethicist weighs in on our moral failure to act on climate change

Monique Deveaux
The Conversation
Originally published September 26, 2019

Here is an excerpt:

This call to collective moral and political responsibility is exactly right. As individuals, we can all be held accountable for helping to stop the undeniable environmental harms around us and the catastrophic threat posed by rising levels of CO2 and other greenhouse gases. Those of us with a degree of privilege and influence have an even greater responsibility to assist and advocate on behalf of those most vulnerable to the effects of global warming.

This group includes children everywhere whose futures are uncertain at best, terrifying at worst. It also includes those who are already suffering from severe weather events and rising water levels caused by global warming, and communities dispossessed by fossil fuel extraction. Indigenous peoples around the globe whose lands and water systems are being confiscated and polluted in the search for ever more sources of oil, gas and coal are owed our support and assistance. So are marginalized communities displaced by mountaintop removal and destructive dam energy projects, climate refugees and many others.

The message of climate activists is that we can't fulfill our responsibilities simply by making green choices as consumers or expressing support for their cause. The late American political philosopher Iris Young thought that we could only discharge our "political responsibility for injustice," as she put it, through collective political action.

The interests of the powerful, she warned, conflict with the political responsibility to take actions that challenge the status quo—but which are necessary to reverse injustices.

As the striking school children and older climate activists everywhere have repeatedly pointed out, political leaders have so far failed to enact the carbon emissions reduction policies that are so desperately needed. Despite UN Secretary General António Guterres' sombre words of warning at the Climate Action Summit, the UN is largely powerless in the face of governments that refuse to enact meaningful carbon-reducing policies, such as China and the U.S.

The info is here.

Moral Judgment as Categorization

Cillian McHugh, and others
PsyArXiv
Originally posted September 17, 2019

Abstract

We propose that the making of moral judgments is an act of categorization; people categorize events, behaviors, or people as ‘right’ or ‘wrong’. This approach builds on the currently dominant dual-processing approach to moral judgment in the literature, providing important links to developmental mechanisms in category formation, while avoiding recently developed critiques of dual-systems views. Stable categories are the result of skill in making context-relevant categorizations. People learn that various objects (events, behaviors, people etc.) can be categorized as ‘right’ or ‘wrong’. Repetition and rehearsal then result in these categorizations becoming habitualized. According to this skill formation account of moral categorization, the learning, and the habitualization of the forming of, moral categories, occurs as part of goal-directed activity, and is sensitive to various contextual influences. Reviewing the literature, we highlight the essential similarity of categorization principles and processes of moral judgments. Using a categorization framework, we provide an overview of moral category formation as the basis for moral judgments. The implications for our understanding of the making of moral judgments are discussed.

Conclusion

We propose a revisiting of the categorization approach to the understanding of moral judgment proposed by Stich (1993). This approach, in providing a coherent account of the emergence of stability in the formation of moral categories, provides an account of the emergence of moral intuitions. This account of the emergence of moral intuitions predicts that emergent stable moral intuitions will mirror real-world social norms or collectively agreed moral principles. It is also possible that the emergence of moral intuitions can be informed by prior reasoning, allowing for the so-called “intelligence” of moral intuitions (e.g., Pizarro & Bloom, 2003; Royzman, Kim, & Leeman, 2015). This may even allow for the traditionally opposing rationalist and intuitionist positions (e.g., Fine, 2006; Haidt, 2001; Hume, 2000/1748; Kant, 1959/1785; Kennett & Fine, 2009; Kohlberg, 1971; Nussbaum & Kahan, 1996; Cameron et al., 2013; Prinz, 2005; Pizarro & Bloom, 2003; Royzman et al., 2015; see also Mallon & Nichols, 2010, p. 299) to be integrated. In addition, the account of the emergence of moral intuitions described here is also consistent with discussions of the emergence of moral heuristics (e.g., Gigerenzer, 2008; Sinnott-Armstrong, Young, & Cushman, 2010).

The research is here.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., and others (2015)
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.

Conclusion

Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals the diversity of approaches that (indirectly) reveals the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure.  One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle.  Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Saturday, October 19, 2019

Forensic Clinicians’ Understanding of Bias

Tess Neal, Nina MacLean, Robert D. Morgan,
and Daniel C. Murrie
Psychology, Public Policy, and Law, 
Sep 16, 2019, No Pagination Specified

Abstract:

Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly-selected licensed psychologists with forensic interests to examine a) their familiarity with and understanding of cognitive biases, b) their self-reported strategies to mitigate bias, and c) the relation of a and b to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.

Here is the conclusion:

These findings (along with Neal & Brodsky’s, 2016) suggest that forensic clinicians are in need of additional training not only to recognize biases but also to begin to effectively mitigate harm from biases. For example, in predoctoral (e.g., internship) and postdoctoral (fellowship) training, didactics could address bias: recognizing it and providing strategies for minimizing it. Additionally, supervisors could address identifying and reducing bias as a regular part of supervision (e.g., by including this as part of case conceptualization). However, further research is needed to determine the types of training and workflow strategies that best reduce bias. Future studies should focus on experimentally examining the presence of biases and ways to mitigate their effects in forensic evaluations.

The research is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

The Koch-backed right-to-try law has been a bust, but still threatens our health

Michael Hiltzik
The Los Angeles Times
Originally posted September 17, 2019

The federal right-to-try law, signed by President Trump in May 2018 as a sop to right-wing interests, including the Koch brothers network, always was a cruel sham perpetrated on sufferers of intractably fatal diseases.

As we’ve reported, the law was promoted as a compassionate path to experimental treatments for those patients — but in fact was a cynical ploy aimed at emasculating the Food and Drug Administration in a way that would undermine public health and harm all patients.

Now that a year has passed since the law’s enactment, the assessments of how it has functioned are beginning to flow in. As NYU bioethicist Arthur Caplan observed to Ed Silverman’s Pharmalot blog, “the right to try remains a bust.”

His judgment is seconded by the veteran pseudoscience debunker David Gorski, who writes: “Right-to-try has been a spectacular failure thus far at getting terminally ill patients access to experimental drugs.”

That should come as no surprise, Gorski adds, because “right-to-try was never about helping terminally ill patients. ... It was always about ideology more than anything else. It was always about weakening the FDA’s ability to regulate drug approval.”

The info is here.

Thursday, October 17, 2019

AI ethics and the limits of code(s)

Geoff Mulgan
nesta.org.uk
Originally published September 16, 2019

Here is an excerpt:

1. Ethics involve context and interpretation - not just deduction from codes.

Too much writing about AI ethics uses a misleading model of what ethics means in practice. It assumes that ethics can be distilled into principles from which conclusions can then be deduced, like a code. The last few years have brought a glut of lists of principles (including some produced by colleagues at Nesta). Various overviews have been attempted in recent years. A recent AI Ethics Guidelines Global Inventory collects over 80 different ethical frameworks. There’s nothing wrong with any of them and all are perfectly sensible and reasonable. But this isn’t how most ethical reasoning happens. The lists assume that ethics is largely deductive, when in fact it is interpretive and context specific, as is wisdom. One basic reason is that the principles often point in opposite directions - for example, autonomy, justice and transparency. Indeed, this is also the lesson of medical ethics over many decades. Intense conversation about specific examples, working through difficult ambiguities and contradictions, counts for a lot more than generic principles.

The info is here.

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and the prevailing approach to data has been to grab it first and figure out how to use and monetize it later. Today’s consumers are becoming more educated and savvy about how companies collect, use and monetize their data; they are starting to make buying decisions based on privacy considerations, and complaining to regulators and lawmakers about how the tech industry uses their data without their control or authorization.

Although consumers’ education is slowly deepening, data privacy laws, both internationally and in the U.S., are starting to address consumers’ concerns about the vast amount of individually identifiable data about them that is collected, used and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators and lawmakers are requiring them to do so. But tech companies should consider treating data ethics as a fundamental core value of the company’s mission, and should determine how it will be addressed in their corporate governance structure.

The info is here.

Wednesday, October 16, 2019

Birmingham psychologist defrauded state Medicaid of more than $1.5 million, authorities say

Carol Robinson
al.com
Originally published August 15, 2019

A Birmingham psychologist has been charged with defrauding the Alabama Medicaid Agency of more than $1 million by filing false claims for counseling services that were not provided.

Sharon D. Waltz, 50, has agreed to plead guilty to the charge and pay restitution in the amount of $1.5 million, according to a joint announcement Thursday by Northern District of Alabama U.S. Attorney Jay Town, Department of Health and Human Services -Office of Inspector General Special Agent Derrick L. Jackson and Alabama Attorney General Steve Marshall.

“The greed of this defendant deprived mental health care to many at-risk young people in Alabama, with the focus on profit rather than the efficacy of care,” Town said. “The costs are not just monetary but have social and health impacts on the entire Northern District. This prosecution, and this investigation, demonstrates what is possible when federal and state law enforcement agencies work together.”

The info is here.

Tribalism is Human Nature

Clark, Cory & Liu, Brittany & Winegard, Bo & Ditto, Peter.  (2019).
Current Directions in Psychological Science. 
10.1177/0963721419862289.

Abstract

Humans evolved in the context of intense intergroup competition, and groups comprised of loyal members more often succeeded than those that were not. Therefore, selective pressures have consistently sculpted human minds to be "tribal," and group loyalty and concomitant cognitive biases likely exist in all groups. Modern politics is one of the most salient forms of modern coalitional conflict and elicits substantial cognitive biases. Given the common evolutionary history of liberals and conservatives, there is little reason to expect pro-tribe biases to be higher on one side of the political spectrum than the other. We call this the evolutionarily plausible null hypothesis and recent research has supported it. In a recent meta-analysis, liberals and conservatives showed similar levels of partisan bias, and a number of pro-tribe cognitive tendencies often ascribed to conservatives (e.g., intolerance toward dissimilar others) have been found in similar degrees in liberals. We conclude that tribal bias is a natural and nearly ineradicable feature of human cognition, and that no group—not even one’s own—is immune.

Conclusion 

Humans are tribal creatures. They were not designed to reason dispassionately about the world; rather, they were designed to reason in ways that promote the interests of their coalition (and hence, themselves). It would therefore be surprising if a particular group of individuals did not display such tendencies, and recent work suggests, at least in the U.S. political sphere, that both liberals and conservatives are substantially biased—and to similar degrees. Historically, and perhaps even in modern society, these tribal biases are quite useful for group cohesion but perhaps also for other moral purposes (e.g., liberal bias in favor of disadvantaged groups might help increase equality). Also, it is worth noting that a bias toward viewing one’s own tribe in a favorable light is not necessarily irrational. If one’s goal is to be admired among one’s own tribe, fervidly supporting their agenda and promoting their goals, even if that means having or promoting erroneous beliefs, is often a reasonable strategy (Kahan et al., 2017). The incentives for holding an accurate opinion about global climate change, for example, may not be worth the social rejection and loss of status that could accompany challenging the views of one’s political ingroup.

The info is here.

Tuesday, October 15, 2019

Want To Reduce Suicides? Follow The Data — To Medical Offices, Motels And Even Animal Shelters

Maureen O’Hagan
Kaiser Health News
Originally published September 23, 2019

Here is an excerpt:

Experts have long believed that suicide is preventable, and there are evidence-based programs to train people how to identify and respond to folks in crisis and direct them to help. That’s where Debra Darmata, Washington County’s suicide prevention coordinator, comes in. Part of Darmata’s job involves running these training programs, which she described as like CPR but for mental health.

The training is typically offered to people like counselors, educators or pastors. But with the new data, the county realized they were missing people who may have been the last to see the decedents alive. They began offering the training to motel clerks and housekeepers, animal shelter workers, pain clinic staffers and more.

It is a relatively straightforward process: Participants are taught to recognize signs of distress. Then they learn how to ask a person if he or she is in crisis. If so, the participants’ role is not to make the person feel better or to provide counseling or anything of the sort. It is to call a crisis line, and the experts will take over from there.

Since 2014, Darmata said, more than 4,000 county residents have received training in suicide prevention.

“I’ve worked in suicide prevention for 11 years,” Darmata said, “and I’ve never seen anything like it.”

The sheriff’s office has begun sending a deputy from its mental health crisis team when doing evictions. On the eviction paperwork, they added the crisis line number and information on a county walk-in mental health clinic. Local health care organizations have new procedures to review cases involving patient suicides, too.

The info is here.

Why not common morality?

Rhodes R 
Journal of Medical Ethics 
Published Online First: 11 September 2019. 
doi: 10.1136/medethics-2019-105621

Abstract

This paper challenges the leading common morality accounts of medical ethics which hold that medical ethics is nothing but the ethics of everyday life applied to today’s high-tech medicine. Using illustrative examples, the paper shows that neither the Beauchamp and Childress four-principle account of medical ethics nor the Gert et al 10-rule version is an adequate and appropriate guide for physicians’ actions. By demonstrating that medical ethics is distinctly different from the ethics of everyday life and cannot be derived from it, the paper argues that medical professionals need a touchstone other than common morality for guiding their professional decisions. That conclusion implies that a new theory of medical ethics is needed to replace common morality as the standard for understanding how medical professionals should behave and what medical professionalism entails. En route to making this argument, the paper addresses fundamental issues that require clarification: what is a profession? how is a profession different from a role? how is medical ethics related to medical professionalism? The paper concludes with a preliminary sketch for a theory of medical ethics.

Monday, October 14, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Samuel Johnson and Jaye Ahn
PsyArXiv
Originally posted September 10, 2019

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

General Discussion

These studies begin to map out the principles governing how the mind combines rights and wrongs to form summary judgments of blameworthiness. Moreover, these principles are explained by inferences about character, which also explain differences across scenarios and participants. These results overall buttress person-based accounts of morality (Uhlmann et al., 2014), according to which morality serves primarily to identify and track individuals likely to be cooperative and trustworthy social partners in the future.

These results also have implications for moral psychology beyond third-party judgments. Moral behavior is motivated largely by its expected reputational consequences, thus studying the psychology of third-party reputational judgments is key for understanding people’s behavior when they have opportunities to perform licensing or offsetting acts. For example, theories of moral self-licensing (Merritt et al., 2010) disagree over whether licensing occurs due to moral credits (i.e., having done good, one can now “spend” the moral credit on a harm) versus moral credentials (i.e., having done good, later bad acts are reframed as less blameworthy).

The research is here.

Why we don’t always punish: Preferences for non-punitive responses to moral violations

Joseph Heffner & Oriel FeldmanHall
Scientific Reports, volume 9, 
Article number: 13219 (2019) 

Abstract

While decades of research demonstrate that people punish unfair treatment, recent work illustrates that alternative, non-punitive responses may also be preferred. Across five studies (N = 1,010) we examine non-punitive methods for restoring justice. We find that in the wake of a fairness violation, compensation is preferred to punishment, and once maximal compensation is available, punishment is no longer the favored response. Furthermore, compensating the victim—as a method for restoring justice—also generalizes to judgments of more severe crimes: participants allocate more compensation to the victim as perceived severity of the crime increases. Why might someone refrain from punishing a perpetrator? We investigate one possible explanation, finding that punishment acts as a conduit for different moral signals depending on the social context in which it arises. When choosing partners for social exchange, there are stronger preferences for those who previously punished as third-party observers but not those who punished as victims. This is in part because third-parties are perceived as relatively more moral when they punish, while victims are not. Together, these findings demonstrate that non-punitive alternatives can act as effective avenues for restoring justice, while also highlighting that moral reputation hinges on whether punishment is enacted by victims or third-parties.

The research is here.

Readers may want to think about patients in psychotherapy and licensing board actions.

Sunday, October 13, 2019

A Successful Artificial Memory Has Been Created

Robert Martone
Scientific American
Originally posted August 27, 2019

Here is the conclusion:

There are legitimate motives underlying these efforts. Memory has been called “the scribe of the soul,” and it is the source of one’s personal history. Some people may seek to recover lost or partially lost memories. Others, such as those afflicted with post-traumatic stress disorder or chronic pain, might seek relief from traumatic memories by trying to erase them.

The methods used here to create artificial memories will not be employed in humans anytime soon: none of us are transgenic like the animals used in the experiment, nor are we likely to accept multiple implanted fiber-optic cables and viral injections. Nevertheless, as technologies and strategies evolve, the possibility of manipulating human memories becomes all the more real. And the involvement of military agencies such as DARPA invariably renders the motivations behind these efforts suspect. Are there things we all need to be afraid of or that we must or must not do? The dystopian possibilities are obvious.

Creating artificial memories brings us closer to learning how memories form and could ultimately help us understand and treat dreadful diseases such as Alzheimer’s. Memories, however, cut to the core of our humanity, and we need to be vigilant that any manipulations are approached ethically.

The info is here.

Saturday, October 12, 2019

Lolita understood that some sex is transactional. So did I

Tamara MacLeod
aeon.co
Originally published September 11, 2019

Here is an excerpt:

However, I think that it is the middle-class consciousness of liberal feminism that excluded sex work from its platform. After all, wealthier women didn’t need to do sex work as such; they operated within the state-sanctioned transactional boundaries of marriage. The dissatisfaction of the 20th-century housewife was codified as a struggle for liberty and independence as an addition to subsidised material existence, making a feminist discourse on work less about what one has to do, and more about what one wants to do. A distinction within women’s work emerged: if you don’t enjoy having sex with your husband, it’s just a problem with the marriage. If you don’t enjoy sex with a client, it’s because you can’t consent to your own exploitation. It is a binary view of sex and consent, work and not-work, when the reality is somewhat murkier. It is a stubborn blindness to the complexity of human relations, and maybe of human psychology itself, descending from the viscera-obsessed, radical absolutisms of Andrea Dworkin.

The housewife who married for money and then fakes orgasms, the single mother who has sex with a man she doesn’t really like because he’s offering her some respite: where are the delineations between consent and exploitation, sex and duty? The first time I traded sex for material gain, I had some choices, but they were limited. I chose to be exploited by the man with the resources I needed, choosing his house over homelessness. Lolita was a child, and she was exploited, but she was also conscious of the function of her body in a patriarchal economy. Philosophically speaking, most of us do indeed consent to our own exploitation.

The info is here.

Friday, October 11, 2019

Dying is a Moral Event. NJ Law Caught Up With Morality

T. Patrick Hill
Star-Ledger Guest Column
Originally posted September 9, 2019

New Jersey’s Medical-Aid-in-Dying legislation authorizes physicians to issue a prescription to end the lives of their patients who have been diagnosed with a terminal illness, are expected to die within six months, and have requested their physicians to help them do so. While the legislation does not require physicians to issue the prescription, it does require them to transfer a patient’s medical records to another physician who has agreed to prescribe the lethal medication.

(cut)

The Medical Aid in Dying Act goes even further, concluding that its passage serves the public’s interests, even as it endorses the “right of a qualified terminally ill patient …to obtain medication that the patient may choose to self-administer in order to bring about the patient’s humane and dignified death.”

The info is here.

Is there a right to die?

Eric Mathison
Baylor Medical College of Medicine Blog
Originally posted May 31, 2019

How people think about death is undergoing a major transformation in the United States. In the past decade, there has been a significant rise in assisted dying legalization, and more states are likely to legalize it soon.

People are adapting to a healthcare system that is adept at keeping people alive, but struggles when aggressive treatment is no longer best for the patient. Many people have concluded, after witnessing a loved one suffer through a prolonged dying process, that they don’t want that kind of death for themselves.

Public support for assisted dying is high. Gallup has tracked Americans’ support for it since 1951. The most recent survey, from 2017, found that 73% of Americans support legalization. Eighty-one percent of Democrats and 67% of Republicans support it, making this a popular policy regardless of political affiliation.

The effect has been a recent surge of states passing assisted dying legislation. New Jersey passed legislation in April, meaning seven states (plus the District of Columbia) now allow it. In addition to New Jersey, California, Colorado, Hawaii, and D.C. all passed legislation in the past three years, and seventeen states are considering legislation this year. Currently, around 20% of Americans live in states where assisted dying is legal.

The info is here.

Thursday, October 10, 2019

Moral Distress and Moral Strength Among Clinicians in Health Care Systems: A Call for Research

Connie M. Ulrich and Christine Grady
NAM Perspectives. 
https://doi.org/10.31478/201909c


Here is an excerpt:

Evidence shows that dissatisfaction and wanting to leave one’s job—and the profession altogether—often follow morally distressing encounters. Ethics education that builds cognitive and communication skills, teaches clinicians ethical concepts, and helps them gain confidence may be essential in building moral strength. One study found, for example, that among practicing nurses and social workers, those with the least ethics education were also the least confident, the least likely to use ethics resources (if available), and the least likely to act on their ethical concerns. In this national study, as many as 23 percent of nurses reported having had no ethics education at all. But the question remains—is ethics education enough?

Many factors likely support or hinder a clinician’s capacity and willingness to act with moral strength. More research is needed to investigate how interdisciplinary ethics education and institutional resources can help nurses, physicians, and others voice their ethical concerns, help them agree on morally acceptable actions, and support their capacity and propensity to act with moral strength and confidence. Research on moral distress and ethical concerns in everyday clinical practice can begin to build a knowledge base that will inform clinical training—in both educational and health care institutions—and that will help create organizational structures and processes to prepare and support clinicians to encounter potentially distressing situations with moral strength. Research can help tease out what is important and predictive for taking (or not taking) ethical action in morally distressing circumstances. This knowledge would be useful for designing strategies to support clinician well-being. Indeed, studies should focus on the influences that affect clinicians’ ability and willingness to become involved or take ownership of ethically laden patient care issues, and their level of confidence in doing so.

Our illusory sense of agency has a deeply important social purpose

Chris Frith
aeon.com
Originally published September 22, 2019

Here are two excerpts:

We humans like to think of ourselves as mindful creatures. We have a vivid awareness of our subjective experience and a sense that we can choose how to act – in other words, that our conscious states are what cause our behaviour. Afterwards, if we want to, we might explain what we’ve done and why. But the way we justify our actions is fundamentally different from deciding what to do in the first place.

Or is it? Most of the time our perception of conscious control is an illusion. Many neuroscientific and psychological studies confirm that the brain’s ‘automatic pilot’ is usually in the driving seat, with little or no need for ‘us’ to be aware of what’s going on. Strangely, though, in these situations we retain an intense feeling that we’re in control of what we’re doing, what can be called a sense of agency. So where does this feeling come from?

It certainly doesn’t come from having access to the brain processes that underlie our actions. After all, I have no insight into the electrochemical particulars of how my nerves are firing or how neurotransmitters are coursing through my brain and bloodstream. Instead, our experience of agency seems to come from inferences we make about the causes of our actions, based on crude sensory data. And, as with any kind of perception based on inference, our experience can be tricked.

(cut)

These observations point to a fundamental paradox about consciousness. We have the strong impression that we choose when we do and don’t act and, as a consequence, we hold people responsible for their actions. Yet many of the ways we encounter the world don’t require any real conscious processing, and our feeling of agency can be deeply misleading.

If our experience of action doesn’t really affect what we do in the moment, then what is it for? Why have it? Contrary to what many people believe, I think agency is only relevant to what happens after we act – when we try to justify and explain ourselves to each other.

The info is here.

Wednesday, October 9, 2019

Whistle-blowers act out of a sense of morality

Alice Walton
review.chicagobooth.edu
Originally posted September 16, 2019

Here is an excerpt:

To understand the factors that predict the likelihood of whistle-blowing, the researchers analyzed data from more than 42,000 participants in the ongoing Merit Principles Survey, which has polled US government employees since 1979, and which covers whistle-blowing. Respondents answer questions about their past experiences with unethical behavior, the approaches they’d take in dealing with future unethical behavior, and their personal characteristics, including their concern for others and their feelings about their organizations.

Concern for others was the strongest predictor of whistle-blowing, the researchers find. This was true both of people who had already blown the whistle on bad behavior and of people who expected they might in the future.

Loyalty to an immediate community—or ingroup, in psychological terms—was also linked to whistle-blowing, but inversely. “The greater people’s concern for loyalty, the less likely they were to blow the whistle,” write the researchers.

Organizational factors—such as people’s perceptions about their employer, their concern for their job, and their level of motivation or engagement—were largely unconnected to whether people spoke up. The only ones that appeared to matter were how fair people perceived their organization to be, as well as the extent to which the organization educated its employees about ways to expose bad behavior and the rights of whistle-blowers. The data suggest these two factors were linked to whether whistle-blowers opted to address the unethical behavior through internal or external avenues. 

The info is here.