Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, April 30, 2020

Suicide Mortality and Coronavirus Disease 2019—A Perfect Storm?

Reger MA, Stanley IH, Joiner TE.
JAMA Psychiatry. 
Published online April 10, 2020.
doi:10.1001/jamapsychiatry.2020.1060

Suicide rates have been rising in the US over the last 2 decades. The latest data available (2018) show the highest age-adjusted suicide rate in the US since 1941.1 It is within this context that coronavirus disease 2019 (COVID-19) struck the US. Concerning disease models have led to historic and unprecedented public health actions to curb the spread of the virus. Remarkable social distancing interventions have been implemented to fundamentally reduce human contact. While these steps are expected to reduce the rate of new infections, the potential for adverse outcomes on suicide risk is high. Actions could be taken to mitigate potential unintended consequences on suicide prevention efforts, which also represent a national public health priority.

COVID-19 Public Health Interventions and Suicide Risk

Secondary consequences of social distancing may increase the risk of suicide. It is important to consider changes in a variety of economic, psychosocial, and health-associated risk factors.

Economic Stress

There are fears that the combination of canceled public events, closed businesses, and shelter-in-place strategies will lead to a recession. Economic downturns are usually associated with higher suicide rates compared with periods of relative prosperity.2 Since the COVID-19 crisis began, businesses have faced adversity and laid off employees. Schools have been closed for indeterminate periods, forcing some parents and guardians to take time off work. The stock market has experienced historic drops, resulting in significant changes in retirement funds. Existing research suggests that sustained economic stress could be associated with higher US suicide rates in the future.

Social Isolation

Leading theories of suicide emphasize the key role that social connections play in suicide prevention. Individuals experiencing suicidal ideation may lack connections to other people and often disconnect from others as suicide risk rises.3 Suicidal thoughts and behaviors are associated with social isolation and loneliness.3 Therefore, from a suicide prevention perspective, it is concerning that the most critical public health strategy for the COVID-19 crisis is social distancing. Furthermore, family and friends remain isolated from individuals who are hospitalized, even when their deaths are imminent. To the extent that these strategies increase social isolation and loneliness, they may increase suicide risk.

The info is here.

Difficult Conversations: Navigating the Tension between Honesty and Benevolence

E. Levine, A. Roberts, & T. Cohen
PsyArXiv
Originally published 18 Jul 19

Abstract

Difficult conversations are a necessary part of everyday life. To help children, employees, and partners learn and improve, parents, managers, and significant others are frequently tasked with the unpleasant job of delivering negative news and critical feedback. Despite the long-term benefits of these conversations, communicators approach them with trepidation, in part, because they perceive them as involving intractable moral conflict between being honest and being kind. In this article, we review recent research on egocentrism, ethics, and communication to explain why communicators overestimate the degree to which honesty and benevolence conflict during difficult conversations, document the conversational missteps people make as a result of this erred perception, and propose more effective conversational strategies that honor the long-term compatibility of honesty and benevolence. This review sheds light on the psychology of moral tradeoffs in conversation, and provides practical advice on how to deliver unpleasant information in ways that improve recipients’ welfare.

From the Summary:

Difficult conversations that require the delivery of negative information from communicators to targets involve perceived moral conflict between honesty and benevolence. We suggest that communicators exaggerate this conflict. By focusing on the short-term harm and unpleasantness associated with difficult conversations, communicators fail to realize that honesty and benevolence are actually compatible in many cases. Providing honest feedback can help a target to learn and grow, thereby improving the target’s overall welfare. Rather than attempting to resolve the honesty-benevolence dilemma via communication strategies that focus narrowly on the short-term conflict between honesty and emotional harm, we recommend that communicators instead invoke communication strategies that integrate and maximize both honesty and benevolence to ensure that difficult conversations lead to long-term welfare improvements for targets. Future research should explore the traits, mindsets, and contexts that might facilitate this approach. For example, creative people may be more adept at integrative solutions to the perceived honesty-benevolence conflict, and people who are less myopic and more cognizant of the future consequences of their choices may be better at recognizing the long-term benefits of honesty.

The info is here.

This research has relevance to psychotherapy.

Wednesday, April 29, 2020

Physician at Epicenter of COVID-19 Crisis Lost to Suicide

Dr. Lorna Breen
Marcia Frellick
MedScape.com
Originally published 28 April 20

Grief-laden posts are coursing through social media following the suicide on Sunday of emergency department physician Lorna M. Breen, MD, who had been immersed in treating COVID-19 patients at the epicenter of the disease in New York City.

Breen, 49, was the medical director of the ED at NewYork-Presbyterian Allen Hospital in Manhattan.

According to a New York Times report, her father, Dr Philip C. Breen, of Charlottesville, Virginia, said his daughter did not have a history of mental illness but had described wrenching scenes, including that patients "were dying before they could even be taken out of ambulances."

The report said Lorna Breen had also contracted the virus but had returned to work after recovering for about 10 days.

Her father told the Times that when he last spoke with her, she seemed "detached" and he knew something was wrong.

"The hospital sent her home again, before her family intervened to bring her to Charlottesville," the elder Breen told the newspaper.

The article indicated that Charlottesville police officers on Sunday responded to a call and Breen was taken to University of Virginia Hospital, where she died from self-inflicted injuries.

The info is here.

Characteristics of Faculty Accused of Academic Sexual Misconduct in the Biomedical and Health Sciences

Espinoza M, Hsiehchen D.
JAMA. 2020;323(15):1503–1505.
doi:10.1001/jama.2020.1810

Abstract

Despite protections mandated in educational environments, unwanted sexual behaviors have been reported in medical training. Policies to combat such behaviors need to be based on better understanding of the perpetrators. We characterized faculty accused of sexual misconduct resulting in institutional or legal actions that proved or supported guilt at US higher education institutions in the biomedical and health sciences.

Discussion

Of biomedical and health sciences faculty accused of sexual misconduct resulting in institutional or legal action, a majority were full professors, chairs or directors, or deans. Sexual misconduct was rarely an isolated event. Accused faculty frequently resigned or remained in academics, and few were sanctioned by governing boards.

Limitations include that only data on accused faculty who received media attention or were involved in legal proceedings were captured. In addition, the duration of behaviors, the exact number of targets, and the outcome data could not be identified for all accused faculty. Thus, this study cannot determine the prevalence of faculty who commit sexual misconduct, and the characteristics may not be generalizable across institutions.

The lack of transparency in investigations suggests that misconduct behaviors may not have been wholly captured by the public documents. Efforts to eliminate nondisclosure agreements are needed to enhance transparency. Further work is needed on mechanisms to prevent sexual misconduct at teaching institutions.

The info is here.

Tuesday, April 28, 2020

Athletes often don’t know what they’re talking about (Apparently, neither do Presidents)

Cathal Kelly
The Globe and Mail
Originally posted 20 April 20

Here is an excerpt:

This is what happens when we depend on celebrities to amplify good advice. The ones who have bad advice will feel similarly empowered. You can see where this particular case slid off the rails.

Djokovic has spent years trying to curate an identity as a sports brand. Early on, he tried the Tiger Beat route, a la Rafael Nadal. When that didn’t work, he tried haughty and detached, a la Roger Federer. Same result.

Some time around 2010, Djokovic decided to go Full Weirdo. He gave up gluten, got into cosmology and decided to present himself as a sort of seeker of universal truths. He even let everyone know that he’d been visiting a Buddhist temple during Wimbledon because … well, who knows what enlightenment and winning at tennis have to do with each other?

Nobody really got his new act, but this switch happened to coincide with Djokovic’s rise to the top. So he stuck with it.

This went hand in hand with an irrepressibly chirpy public persona, one so calculatedly ingratiating that it often had the opposite effect.

It wasn’t a terrible strategy. Highly successful sporting oddbods usually become cult stars. If they hang on long enough, they find general acceptance.

But it didn’t turn out for Djokovic. Even now that he is arguably the greatest men’s player of all time, he still can’t manage the trick. There’s just something about the guy that seems a bit not-of-this-world.

The info is here.

What needs to happen before your boss can make you return to work

Mark Kaufman
www.mashable.com
Originally posted 24 April 20

Here is an excerpt:

But, there is a way for tens of millions of Americans to return to workplaces while significantly limiting how many people infect one another. It will require extraordinary efforts on the part of both employers and governments. This will feel weird, at first: Imagine regularly having your temperature taken at work, routinely getting tested for an infection or immunity, mandatory handwashing breaks, and perhaps even wearing a mask.

Yet, these are exceptional times. So restarting the economy and returning to workplace normalcy will require unparalleled efforts.

"This is truly unprecedented," said Christopher Hayes, a labor historian at the Rutgers School of Management and Labor Relations.

"This is like the 1918 flu and the Great Depression at the same time," Hayes said.

Yet unlike previous recessions and depressions over the last 100 years, most recently the Great Recession of 2008-2009, American workers must now ask themselves an unsettling question: "People now have to worry, ‘Is it safe to go to this job?’" said Hayes.

Right now, many employers aren't nearly prepared to tell workers in the U.S. to return to work and office spaces. To avoid infection, "the only tools you’ve got in your toolbox are the simple but hard-to-sustain public health tools like testing, contact tracing, and social distancing," explained Michael Gusmano, a health policy expert at the Rutgers School of Public Health.

"We’re not anywhere near a situation where you could claim that you can, with any credibility, send people back en masse now," Gusmano said.

The info is here.

Monday, April 27, 2020

Drivers are blamed more than their automated cars when both make mistakes

Awad, E., Levine, S., Kleiman-Weiner, M. et al.
Nat Hum Behav 4, 134–143 (2020).
https://doi.org/10.1038/s41562-019-0762-8

Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

From the Discussion:

Our central finding (diminished blame apportioned to the machine in dual-error cases) leads us to believe that, while there may be many psychological barriers to self-driving car adoption19, public over-reaction to dual-error cases is not likely to be one of them. In fact, we should perhaps be concerned about public underreaction. Because the public are less likely to see the machine as being at fault in dual-error cases like the Tesla and Uber crashes, the sort of public pressure that drives regulation might be lacking. For instance, if we were to allow the standards for automated vehicles to be set through jury-based court-room decisions, we expect that juries will be biased to absolve the car manufacturer of blame in dual-error cases, thereby failing to put sufficient pressure on manufacturers to improve car designs.

The article is here.

Experiments on Trial

Hannah Fry
The New Yorker
Originally posted 24 Feb 20

Here are two excerpts:

There are also times when manipulation leaves people feeling cheated. For instance, in 2018 the Wall Street Journal reported that Amazon had been inserting sponsored products in its consumers’ baby registries. “The ads look identical to the rest of the listed products in the registry, except for a small gray ‘Sponsored’ tag,” the Journal revealed. “Unsuspecting friends and family clicked on the ads and purchased the items,” assuming they’d been chosen by the expectant parents. Amazon’s explanation when confronted? “We’re constantly experimenting,” a spokesperson said. (The company has since ended the practice.)

But there are times when the experiments go further still, leaving some to question whether they should be allowed at all. There was a notorious experiment run by Facebook in 2012, in which the number of positive and negative posts in six hundred and eighty-nine thousand users’ news feeds was tweaked. The aim was to see how the unwitting participants would react. As it turned out, those who saw less negative content in their feeds went on to post more positive stuff themselves, while those who had positive posts hidden from their feeds used more negative words.

A public backlash followed; people were upset to discover that their emotions had been manipulated. Luca and Bazerman argue that this response was largely misguided. They point out that the effect was small. A person exposed to the negative news feed “ended up writing about four additional negative words out of every 10,000,” they note. Besides, they say, “advertisers and other groups manipulate consumers’ emotions all the time to suit their purposes. If you’ve ever read a Hallmark card, attended a football game or seen a commercial for the ASPCA, you’ve been exposed to the myriad ways in which products and services influence consumers’ emotions.”

(cut)

Medicine has already been through this. In the early twentieth century, without a set of ground rules on how people should be studied, medical experimentation was like the Wild West. Alongside a great deal of good work, a number of deeply unethical studies took place—including the horrifying experiments conducted by the Nazis and the appalling Tuskegee syphilis trial, in which hundreds of African-American men were denied treatment by scientists who wanted to see how the lethal disease developed. As a result, there are now clear rules about seeking informed consent whenever medical experiments use human subjects, and institutional procedures for reviewing the design of such experiments in advance. We’ve learned that researchers aren’t always best placed to assess the potential harm of their work.

The info is here.

Sunday, April 26, 2020

Donald Trump: a political determinant of covid-19

Gavin Yamey and Greg Gonsalves
BMJ 2020; 369  (Published 24 April 2020)
doi: https://doi.org/10.1136/bmj.m1643

He downplayed the risk and delayed action, costing countless avertable deaths

On 23 January 2020, the World Health Organization told all governments to get ready for the transmission of a novel coronavirus in their countries. “Be prepared,” it said, “for containment, including active surveillance, early detection, isolation and case management, contact tracing and prevention of onward spread.” Some countries listened. South Korea, for example, acted swiftly to contain its covid-19 epidemic. But US President Donald Trump was unmoved by WHO’s warning, downplaying the threat and calling criticisms of his failure to act “a new hoax.”

Trump’s anaemic response led the US to become the current epicentre of the global covid-19 pandemic, with almost one third of the world’s cases and a still rising number of new daily cases.4 In our interconnected world, the uncontrolled US epidemic has become an obstacle to tackling the global pandemic. Yet the US crisis was an avertable catastrophe.

Dismissing prescient advice on pandemic preparedness from the outgoing administration of the former president, Barack Obama, the Trump administration went on to weaken the nation’s pandemic response capabilities in multiple ways. In May 2018, it eliminated the White House global health security office that Obama established after the 2014-16 Ebola epidemic to foster cross-agency pandemic preparedness. In late 2019, it ended a global early warning programme, PREDICT, that identified viruses with pandemic potential. There were also cuts to critical programmes at the Centers for Disease Control and Prevention (CDC), part and parcel of Trump’s repeated rejections of evidence based policy making for public health.

Denial
After the US confirmed its first case of covid-19 on 22 January 2020, Trump responded with false reassurances, delayed federal action, and the denigration of science. From January to mid-March, he denied that the US faced a serious epidemic risk, comparing the threat to seasonal influenza. He repeatedly reassured Americans that they had nothing to worry about, telling the public: “We think it's going to have a very good ending for us” (30 January), “We have it very much under control in this country” (23 February), and “The virus will not have a chance against us. No nation is more prepared, or more resilient, than the United States” (11 March).

The info is here.

Saturday, April 25, 2020

Punitive but discriminating: Reputation fuels ambiguously-deserved punishment but also sensitivity to moral nuance

Jordan, J., & Kteily, N.
(2020, March 21).
https://doi.org/10.31234/osf.io/97nhj

Abstract

Reputation concerns can motivate moralistic punishment, but existing evidence comes exclusively from contexts in which punishment is unambiguously deserved. Recent debates surrounding “virtue signaling” and “outrage culture” raise the question of whether reputation may also fuel punishment in more ambiguous cases—and even encourage indiscriminate punishment that ignores moral nuance. But when the moral case for punishment is ambiguous, do people actually expect punishing to make them look good? And if so, are people willing to use ambiguously-deserved punishment to gain reputational benefits, or do personal reservations about whether punishment is merited restrain them from doing so? We address these questions across 11 experiments (n = 9448) employing both hypothetical vignette and costly behavioral paradigms. We find that reputation does fuel ambiguously-deserved punishment. Subjects expect even ambiguously-deserved punishment to look good, especially when the audience is highly ideological. Furthermore, despite personally harboring reservations about its morality, subjects readily use ambiguously-deserved punishment to gain reputational benefits. Yet we also find that reputation can do more to fuel unambiguously-deserved punishment. Subjects robustly expect unambiguously-deserved punishment to look better than ambiguously-deserved punishment, even when the audience is highly ideological. And we find evidence that as a result, introducing reputational incentives can preferentially increase unambiguously-deserved punishment—causing punishers to differentiate more between ambiguous and unambiguous cases and thereby heightening sensitivity to moral nuance. We thus conclude that the drive to signal virtue can make people more punitive but also more discriminating, painting a nuanced picture of the role that reputation plays in outrage culture.

From the Discussion:

Here, we have provided a novel framework for understanding the influence of reputational incentives on moralistic punishment in ambiguous and unambiguous cases. By looking beyond contexts in which punishment is unambiguously merited, and by considering the important role of audience ideology, our work fills critical theoretical gaps in our understanding of the human moral psychology surrounding punishment and reputation. Our findings also speak directly to concerns raised by critics of “outrage culture”, who have suggested that “virtue signaling” fuels ambiguously-deserved punishment and even encourages indiscriminate punishment that ignores moral nuance, thereby contributing to negative societal outcomes (e.g., by unfairly harming alleged perpetrators and chilling social discourse). More specifically, our results present a complex portrait of the role that reputation plays in outrage culture, lending credence to some concerns about virtue signaling but casting doubt on others.

Friday, April 24, 2020

COVID-19 Is Making Moral Injury to Physicians Much Worse

Wendy Dean
Medscape.com
Originally published 1 April 20

Here is an excerpt:

Moral injury is also coming to the forefront as physicians consider rationing scarce resources with too little guidance. Which surgeries truly justify use of increasingly scarce PPE? A cardiac valve replacement? A lumpectomy? Repairing a torn ligament?

Each denial has profound impact on both the patients whose surgeries are delayed and the clinicians who decide their fates. Yet worse decisions may await clinicians. If, for example, New York City needs an additional 30,000 ventilators but receives only 500, physicians will be responsible for deciding which 29,500 patients will not be ventilated, virtually assuring their demise.

How will physicians make those decisions? How will they cope? The situation of finite resources will force an immediate pivot to assessing patients according to not only their individual needs but also to society's need for that patient's contribution. It will be a wrenching restructuring.

Here are the essential principles for mitigating the impact of moral injury in the context of COVID-19. (They are the same as recommendations in the time before COVID-19.)

1. Value physicians

a. Physicians are putting everything on the line. They're walking into a wildfire of a pandemic, wearing pajamas, with a peashooter in their holster. That takes a monumental amount of courage and deserves profound respect.

The info is here.

Sexual attractions, behaviors, and boundary crossings between sport psychology professionals and their athlete-clients

Tess Palmateer & Trent Petrie
Journal of Applied Sport Psychology 
https://doi.org/10.1080/10413200.2020.1728422

Abstract

Participants were 181 sport performance professionals (SPPs); 92 reported being sexually attracted to their athlete-clients (ACs), though few SPPs sought supervision regarding such attractions. In regards to specific behaviors, approximately half reported discussing personal matters unrelated to their work, whereas far fewer had engaged in sexual behaviors with their ACs, such as discussing sexual matters unrelated to their work, and caressing or intimately touching an AC. Common nonsexual boundary crossings (NSBCs) included consulting with an AC in public places, working with an AC at practice, and working with an AC at a competition. Sexual attractions exist and NSBCs occur, thus SPPs need to be trained in these issues to be able to successfully navigate them.

Lay summary: About half of the sport psychology professionals (SPPs) reported being sexually attracted to an athlete-client (AC). Typical boundary crossings included consulting with an AC in public and private places and travelling with ACs. Therefore, SPPs should be ethically trained and seek supervision to effectively work with such attractions.

Thursday, April 23, 2020

We Tend To See Acts We Disapprove Of As Deliberate

Jesse Singal
BPS
Research Digest
Originally published 14 April 20

One of the most important and durable findings in moral and political psychology is that there is a tail-wags-the-dog aspect to human morality. Most of us like to think we have carefully thought-through, coherent moral systems that guide our behaviour and judgments. In reality our behaviour and judgments often stem from gut-level impulses, and only after the fact do we build elaborate moral rationales to justify what we believe and do.

A new paper in the Journal of Personality and Social Psychology examines this issue through a fascinating lens: free will. Or, more specifically, via people’s judgments about how much free will others had when committing various transgressions. The team, led by Jim A. C. Everett of the University of Kent and Cory J. Clark of Durham University, ran 14 studies geared at evaluating the possibility that at least some of the time the moral tail wags the dog: first people decide whether someone is blameworthy, and then judge how much free will they have, in a way that allows them to justify blaming those they want to blame and excusing those they want to excuse.

The researchers examined this hypothesis, for which there is already some evidence, through the lens of American partisan politics. In the paper they note that previous research has shown that conservatives have a greater belief in free will than liberals, and are more moralising in general (that is, they categorise a larger number of acts as morally problematic, and rely on a greater number of principles — or moral foundations — in making these judgments). The first two of the new studies replicated these findings — this is consistent with the idea, put simply, that conservatives believe in free will more because it allows them to level more moral judgments.

The info is here.

Universalization Reasoning Guides Moral Judgment

Levine, S., Kleiman-Weiner, M., and others
(2020, February 23).
https://doi.org/10.31234/osf.io/p7e6h

Abstract

To explain why an action is wrong, we sometimes say: “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and that children show a comparable pattern of judgment as early as 4 years old. We conclude that alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalizing holds an important place in our moral minds.

From the Discussion:

Across five studies, we show that both adults and children sometimes make moral judgments well described by the logic of universalization, and not by standard outcome, rule or norm-based models of moral judgment. We model participants’ judgment of the moral acceptability of an action as proportional to the change in expected utility in the hypothetical world where all interested parties feel free to do the action. This model accounts for the ways in which moral judgment is sensitive to the number of parties hypothetically interested in an action, the threshold at which harmful outcomes occur, and their interaction. By incorporating data on participants’ subjectively perceived utility functions we can predict their moral judgments of threshold problems with quantitative precision, further validating our proposed computational model.
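The universalization logic the authors describe can be sketched in a few lines of code. This is not the authors' model; it is a minimal illustration, with an assumed toy utility function and threshold, of the core idea that an action's acceptability tracks the change in expected utility in a hypothetical world where everyone interested in the action does it.

```python
# Illustrative sketch (not the authors' code) of universalization reasoning:
# acceptability ~ change in expected utility when all interested parties act.
# The utility function, threshold, and parameter values are assumptions.

def utility(num_actors: int, threshold: int, benefit_per_actor: float, harm: float) -> float:
    """Total utility if `num_actors` perform the action; collective harm
    occurs only once the number of actors exceeds a threshold."""
    total = num_actors * benefit_per_actor
    if num_actors > threshold:
        total -= harm  # harmful outcome kicks in past the threshold
    return total

def universalized_acceptability(num_interested: int, threshold: int,
                                benefit_per_actor: float, harm: float) -> float:
    """Change in utility between the world where all interested parties
    act and the world where none do."""
    return (utility(num_interested, threshold, benefit_per_actor, harm)
            - utility(0, threshold, benefit_per_actor, harm))

# With few interested parties, universalizing the action stays harmless;
# with many, the threshold is crossed and the action looks wrong.
few = universalized_acceptability(num_interested=2, threshold=5,
                                  benefit_per_actor=1.0, harm=100.0)
many = universalized_acceptability(num_interested=10, threshold=5,
                                   benefit_per_actor=1.0, harm=100.0)
```

Here `few` is positive while `many` is strongly negative, mirroring the paper's finding that judgments are sensitive to the number of hypothetically interested parties and the harm threshold.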

The research is here.

Wednesday, April 22, 2020

Your Code of Conduct May Be Sending the Wrong Message

F. Gino, M. Kouchaki, & Y. Feldman
Harvard Business Review
Originally posted 13 March 20


Here is an excerpt:

We examined the relationship between the language used (personal or impersonal) in these codes and corporate illegality. Research assistants blind to our research questions and hypotheses coded each document based on the degree to which it used “we” or “member/employee” language. Next, we searched media sources for any type of illegal acts these firms may have been involved in, such as environmental violations, anticompetitive actions, false claims, and fraudulent actions. Our analysis showed that firms that used personal language in their codes of conduct were more likely to be found guilty of illegal behaviors.

We found this initial evidence to be compelling enough to dig further into the link between personal “we” language and unethical behavior. What would explain such a link? We reasoned that when language communicating ethical standards is personal, employees tend to assume they are part of a community where members are easygoing, helpful, cooperative, and forgiving. By contrast, when the language is impersonal — for example, “organizational members are expected to put customers first” — employees feel they are part of a transactional relationship in which members are more formal and distant.

Here’s the problem: When we view our organization as tolerant and forgiving, we believe we’re less likely to be punished for misconduct. Across nine different studies, using data from lab- and field-based experiments as well as a large dataset of S&P firms, we find that personal language (“we,” “us”) leads to less ethical behavior than impersonal language (“employees,” “members”) does, apparently because people encountering more personal language believe their organization is less serious about punishing wrongdoing.

The info is here.

Ethics deserve a starring role in business dealings

Barbara Lang
bizjournals.com
Originally published 5 March 20


They created cultures of fear, deception and arrogance, and they put their own personal interests in front of all others, including their own families. They didn’t care whose lives they destroyed, using their power to conquer and destroy anyone blocking their path to money and gratification. Shockingly, they manipulated those around them — people with whom they built trust — to foster networks of secrecy and allegiance beyond anything we have seen in the history of American business. Ironically, they crucified themselves through historic cheating, lying and a breakdown of ethics never seen before.

Many are household names, and we should all cringe when we hear them, even as they are reduced to insignificance and confined to moldy jail cells. Ken Lay, CEO and chairman of Enron, was the mastermind of a historic accounting scandal at the energy company, resulting in its bankruptcy. He was found guilty of 10 counts of securities fraud before he died in 2006. There are also the two infamous Bernies: Ebbers and Madoff. Ebbers, the former WorldCom CEO, was convicted of securities fraud and conspiracy as part of that company’s false financial reporting scandal. Maybe the most egregious and sinister of them all was Madoff, whose Ponzi scheme defrauded innocent investors of millions of dollars and life savings. He rots in federal prison while his clients try to make sense of the destruction he knowingly caused.

The info is here.

Tuesday, April 21, 2020

When Google and Apple get privacy right, is there still something wrong?

Tamar Sharon
Medium.com
Originally posted 15 April 20

Here is an excerpt:

As the understanding that we are in this for the long run settles in, the world is increasingly turning its attention to technological solutions to address the devastating COVID-19 virus. Contact-tracing apps in particular seem to hold much promise. Using Bluetooth technology to communicate between users’ smartphones, these apps could map contacts between infected individuals and alert people who have been in proximity to an infected person. Some countries, including China, Singapore, South Korea and Israel, deployed these early on. Health authorities in the UK, France, Germany, the Netherlands, Iceland, the US and other countries are currently considering implementing such apps as a means of easing lockdown measures.

There are some bottlenecks. Do they work? The effectiveness of these applications has not been evaluated, either in isolation or as part of an integrated strategy. How many people would need to use them? Not everyone has a smartphone. Even in rich countries, the most vulnerable group, those aged over 80, is least likely to have one. Then there’s the question of fundamental rights and liberties, first and foremost privacy and data protection. Will contact tracing become part of a permanent surveillance structure in the prolonged “state of exception” we are sleep-walking into?

Prompted by public discussions about this last concern, a number of European governments have indicated the need to develop such apps in a way that preserves privacy, while independent efforts by technologists and scientists to deliver privacy-centric solutions have been cropping up. The Pan-European Privacy-Preserving Proximity Tracing initiative (PEPP-PT), and in particular the Decentralised Privacy-Preserving Proximity Tracing (DP-3T) protocol, which provides an outline for a decentralised system, are notable forerunners. Somewhat late in the game, the European Commission last week issued a Recommendation for a pan-European approach to the adoption of contact-tracing apps that would respect fundamental rights such as privacy and data protection.
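The decentralised idea behind protocols like DP-3T can be illustrated with a toy sketch. To be clear, this is an illustration of the general approach, not the actual specification: the key sizes, the rotation schedule, and all function names here are simplified assumptions.

```python
import hmac
import hashlib
import secrets

def daily_key() -> bytes:
    # Each phone generates a fresh secret key per day.
    return secrets.token_bytes(32)

def ephemeral_ids(key: bytes, n: int = 96) -> list:
    # Derive short, unlinkable ephemeral IDs from the day's key;
    # these are what the phone broadcasts over Bluetooth.
    return [hmac.new(key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(n)]

# Bob's phone records the ephemeral IDs it hears nearby.
alice_key = daily_key()
heard_by_bob = set(ephemeral_ids(alice_key)[:5])  # Bob was briefly near Alice

# If Alice tests positive, she publishes only her daily keys.
# Everyone re-derives the corresponding IDs locally and checks for
# overlap; no central server ever learns who met whom.
published_keys = [alice_key]
exposed = any(eid in heard_by_bob
              for key in published_keys
              for eid in ephemeral_ids(key))
print(exposed)  # True
```

The privacy argument rests on the direction of the data flow: contact matching happens on each user's own device, so the server only ever sees the keys of people who voluntarily report an infection.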

The info is here.

Piercing the Smoke Screen: Dualism, Free Will, and Christianity

S. Murray, E. Murray, & T. Nadelhoffer
PsyArXiv Preprints
Originally created on 18 Feb 20

Abstract

Research on the folk psychology of free will suggests that people believe free will is incompatible with determinism and that human decision-making cannot be exhaustively characterized by physical processes. Some suggest that certain elements of Western cultural history, especially Christianity, have helped to entrench these beliefs in the folk conceptual economy. Thus, on the basis of this explanation, one should expect to find three things: (1) a significant correlation between belief in dualism and belief in free will, (2) that people with predominantly incompatibilist commitments are likely to exhibit stronger dualist beliefs than people with predominantly compatibilist commitments, and (3) people who self-identify as Christians are more likely to be dualists and incompatibilists than people who do not self-identify as Christians. We present the results of two studies (n = 378) that challenge two of these expectations. While we do find a significant correlation between belief in dualism and belief in free will, we found no significant difference in dualist tendencies between compatibilists and incompatibilists. Moreover, we found that self-identifying as Christian did not significantly predict preference for a particular metaphysical conception of free will. This calls into question assumptions about the relationship between beliefs about free will, dualism, and Christianity.

The research is here.

Monday, April 20, 2020

How Becoming a Doctor Made Me a Worse Listener

Adeline Goss
JAMA. 2020;323(11):1041-1042.
doi:10.1001/jama.2020.2051

Here is an excerpt:

And I hadn’t noticed. Maybe that was because I was still connecting to patients. I still choked up when they cried, felt joy when they rejoiced, felt moved by and grateful for my work, and generally felt good about the care I was providing.

But as I moved through my next days in clinic, I began to notice the unconscious tricks I had developed to maintain a connection under time pressure. A whole set of expressions played out across my face during history taking—nonverbal concern, nonverbal gentleness, nonverbal apology—a time-efficient method of conveying empathy even when I was asking directed questions, controlling the type and volume of information I received, and, at times, interrupting. Sometimes I apologized to patients for my style of interviewing, explaining that I wanted to make sure I understood things clearly so that I could treat them. I apologized because I didn’t like communicating this way. I can’t imagine it felt good to them.

What’s strange is that, at the end of these visits, patients often thanked me for my concern and detail-orientedness. They may have interpreted my questioning as a sign that I was interested. But was I?

Interest is a multilayered concept in medicine. I care about patients, and I am interested in their stories in the sense that they contain the information I need to make the best possible decisions for their care. Interest motivates doctors to take a detailed history, review the chart, and analyze the literature. Interest leads to the correct diagnosis and treatment. Residency rewards this kind of interest. Perhaps as a result, looking around at my co-residents, it’s in abundant supply, even when time is tight.

The info is here.

Europe plans to strictly regulate high-risk AI technology

Nicholas Wallace
sciencemag.org
Originally published 19 Feb 20

Here is an excerpt:

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The info is here.

Sunday, April 19, 2020

On the ethics of algorithmic decision-making in healthcare

Grote T, Berens P
Journal of Medical Ethics 
2020;46:205-211.

Abstract

In recent years, a plethora of high-profile scientific publications has reported machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has sparked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

From the Conclusion

In this paper, we aimed to examine which opportunities and pitfalls machine learning potentially provides for enhancing medical decision-making on epistemic and ethical grounds. As should have become clear, enhancing medical decision-making by deferring to machine learning algorithms requires trade-offs at different levels. Clinicians, or their respective healthcare institutions, are facing a dilemma: while there is plenty of evidence of machine learning algorithms outsmarting their human counterparts, their deployment comes at the cost of high degrees of uncertainty. On epistemic grounds, relevant uncertainty promotes risk-averse decision-making among clinicians, which then might lead to impoverished medical diagnosis. From an ethical perspective, deferring to machine learning algorithms blurs the attribution of accountability and imposes health risks on patients. Furthermore, the deployment of machine learning might also foster a shift of norms within healthcare. It needs to be pointed out, however, that none of the issues we discussed presents a knockout argument against deploying machine learning in medicine, and our article is not intended this way at all. On the contrary, we are convinced that machine learning provides plenty of opportunities to enhance decision-making in medicine.

The article is here.

Saturday, April 18, 2020

Experimental Philosophical Bioethics

Brian Earp and colleagues
AJOB Empirical Bioethics (2020), 11:1, 30-33
DOI: 10.1080/23294515.2020.1714792

There is a rich tradition in bioethics of gathering empirical data to inform, supplement, or test the implications of normative ethical analysis. To this end, bioethicists have drawn on diverse methods, including qualitative interviews, focus groups, ethnographic studies, and opinion surveys to advance understanding of key issues in bioethics. In so doing, they have developed strong ties with neighboring disciplines such as anthropology, history, law, and sociology.  Collectively, these lines of research have flourished in the broader field of “empirical bioethics” for more than 30 years (Sugarman and Sulmasy 2010).

More recently, philosophers from outside the field of bioethics have similarly employed empirical methods—drawn primarily from psychology, the cognitive sciences, economics, and related disciplines—to advance theoretical debates. This approach, which has come to be called experimental philosophy (or x-phi), relies primarily on controlled experiments to interrogate the concepts, intuitions, reasoning, implicit mental processes, and empirical assumptions about the mind that play a role in traditional philosophical arguments (Knobe et al. 2012). Within the moral domain, for example, experimental philosophy has begun to contribute to long-standing debates about the nature of moral judgment and reasoning; the sources of our moral emotions and biases; the qualities of a good person or a good life; and the psychological basis of moral theory itself (Alfano, Loeb, and Plakias 2018). We believe that experimental philosophical bioethics—or “bioxphi”—can similarly contribute to bioethical scholarship and debate.1 Here, we introduce this emerging discipline, explain how it is distinct from empirical bioethics more broadly construed, and attempt to characterize how it might advance theory and practice in this area.

The paper is here.

Friday, April 17, 2020

Trump's Claims Are Dangerous: COVID-19 & Hydroxychloroquine

Andre Picard
Globe and Mail
Originally published 9 April 20

Here is an excerpt:

The principal argument the President has used in support of hydroxychloroquine is the rhetorical statement: “What do we have to lose?” (He repeated that phrase five times at his Saturday media briefing.) “I’m not a doctor but I have common sense,” Mr. Trump added.

“Common sense” is not evidence. And “what have we got to lose?” is certainly no way to practise medicine – or policy-making for that matter.

Physicians in China started using hydroxychloroquine to treat COVID-19 patients early in the pandemic. There was certainly some logic to this move. The drug has antiviral properties and showed some promise in vitro but that doesn’t mean it will work in vivo.

It remains a desperation drug, something to try when the rest of the very limited armamentarium has been exhausted.

The evidence of benefit in patients is mostly anecdotal, based on highly publicized but scientifically weak studies. Controversial microbiologist Didier Raoult has made wild claims about the effectiveness of hydroxychloroquine but his study, published in the International Journal of Antimicrobial Agents, is little more than anecdotal.

Similarly, Vladimir Zelenko, a small-town doctor in New York State, has gained internet fame promoting a cocktail of three drugs – hydroxychloroquine, the antibiotic azithromycin and zinc sulphate. There is no real evidence for claims that he has cured hundreds of cases of COVID-19, but that hasn’t stopped Mr. Trump from promoting the regimen.

Proper studies need to be done, with control groups – meaning one group gets the drug(s) and the other does not, and the outcomes are compared. Like it or not, that takes time.

Impatience is not an excuse to make unsubstantiated claims.

The info is here.

Toward equipping Artificial Moral Agents with multiple ethical theories

George Rautenbach and C. Maria Keet
arXiv:2003.00935v1 [cs.CY] 2 Mar 2020

Abstract

Artificial Moral Agents (AMAs) are the subject of a field in computer science whose purpose is to create autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist.

Of the AMAs theorised to date, all research and design has been done with either no specified normative ethical theory as a basis, or at most one. This is problematic because it narrows the AMA’s functional ability and versatility, which in turn produces moral outcomes that only a limited number of people agree with (thereby undermining the AMA’s ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical norms (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as a proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
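The abstract's mention of XML serialisation can be made concrete with a small sketch. The element and attribute names below are invented for illustration; the paper's actual XSD schema will differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical serialisation of one normative theory for an AMA to load.
# Tag names ("EthicalTheory", "Principle", "DecisionCriterion") are
# illustrative guesses, not the schema from the paper.
theory = ET.Element("EthicalTheory", name="Utilitarianism")
ET.SubElement(theory, "Principle").text = "Maximise aggregate well-being"
criterion = ET.SubElement(theory, "DecisionCriterion", type="consequentialist")
criterion.text = "Choose the action with the highest expected utility"

xml_str = ET.tostring(theory, encoding="unicode")
print(xml_str)
```

Serialising theories to a declarative format along these lines is what would let a single AMA swap between, say, Kantian and utilitarian reasoning without code changes.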

From the Discussion:

A big philosophical grey area in AMAs concerns agency: that is, an entity’s ability to understand available actions and their moral values, and to freely choose between them. Whether machines can truly understand their decisions, and whether they can be held accountable for them, is a matter of philosophical discourse. Whatever the answer may be, AMA agency poses a difficult question that must be addressed.

The question is as follows: should the machine act as an agent itself, or should it act as an informant for another agent? If an AMA reasons for another agent (e.g., a person), then reasoning will be done with that person as the actor and the one who holds responsibility. This has the disadvantage of putting that person’s interest before other morally considerable entities, especially with regard to ethical theories like egoism. Making the machine the moral agent has the advantage of objectivity where multiple people are concerned, but makes it harder to assign blame for its actions – a machine does not care about imprisonment or even disassembly. A Luddite would say it has no incentive to do good to humanity. Of course, a deterministic machine does not need incentive at all, since it will always behave according to the theory it is running. This lack of fear or “personal interest” can be good, because it ensures objective reasoning and fair consideration of affected parties.

The paper is here.

Thursday, April 16, 2020

A Test: Can You Make Morally Mature Choices In A Crisis?

Rob Asghar
Forbes.com
Originally posted 10 April 20

Here is an excerpt:

Crisis Ethics in a COVID-19 Context

Of course, SARS-Cov-2 and the rise of the COVID-19 threat have sharpened the issues and heightened the stakes.

At the moment, we do have a global near-consensus on many things: Stay at home. Conduct your religious gatherings online. Do what you can to protect your family's health and that of others.

But the consensus quickly breaks down. How long can we truly afford to do this, especially given evidence that the virus returns once stricter measures are relaxed? How do we judge the misery caused by the virus against other impending miseries? Will an entire generation be economically shattered?

Here, a number of values grind against one another.

For a good number of idealists, sentimentalists and technocrats, it's inconceivable that society could do anything other than to shutter for as long as necessary to prevent further coronavirus spread. Anything else reveals utter contempt for the elderly and the vulnerable. They argue that lockdowns must be draconian and extended, because those countries that initially had success containing the virus witnessed new outbreaks as soon as they loosened restrictions.

Utilitarians, pushing back, raise concerns about how lockdowns have unintended consequences that grow more dangerous over time. Idealists and technocrats tend to dismiss them as Fox News-addicted ogres who are all too eager to dig a grave for Grandma in order to protect their precious stock portfolios.

But at some point, painful realities do have to be reckoned with. As Liz Alderman wrote in the New York Times recently, European officials are walking a high wire in their efforts to provide massive relief efforts. "European leaders are wary of relaunching the economy before the epidemic is proved to be under control," Alderman wrote. "The tsunami of fiscal support by France and its neighbors — over €2 trillion in spending and loan guarantees combined — can be sustained only a few months, economists say."

The info is here.

How To Move From Data Privacy To Data Ethics

Thomas Walle
forbes.com
Originally posted 11 March 20


Here is an excerpt:

Data Ethics Is Up To Each And Every Company

Data ethics, however, is more nuanced and complicated. It's up to each company to decide which use cases its collected data should or should not support. There are no federal or state laws related to data ethics, and no government bodies that will penalize those who cross the ethical boundaries of how data should and should not be used.

However, in the growing data industry, which is composed of those helping companies and individuals to make better decisions, there’s a constant influx of new data being generated and collected, such as health data, car driving data and location data, to name a few. These data sets and insights are new to the market, and I believe we will start to see the first wave of forward-looking data companies taking a clear stance and drawing their own ethical guidelines.

These are companies that acknowledge the responsibility they have when holding such information and want to see it used for the right use cases -- to make people’s lives better, easier and safer. So, if you agree that data ethics is important and want to be ahead of the curve, what is there to do?

Creating A Set Of Ethical Guidelines

My recommendation for any data company is to define a set of core ethical guidelines your company should adhere to. To accomplish this, follow these steps:

1. Define Your Guidelines

The guidelines should be created by inviting different parts of your organization, to get a balanced and mixed view of what the company sees as acceptable use cases for its insights and data. In my experience, including different departments (such as commercial and engineering), people of different nationalities, and all geographies if your company operates in multiple markets, is crucial to getting a nuanced and healthy view of what the company, its employees and stakeholders see as ethically acceptable.

The info is here.

Wednesday, April 15, 2020

How to be a more ethical Amazon shopper during the pandemic

Samantha Murphy Kelly
cnn.com
Updated on 13 April 20

Here is an excerpt:

For customers who may feel uneasy about these workplace issues but are desperate for household goods, there are a range of options to shop more consciously, from avoiding unnecessary purchases on the platform and tipping Amazon's grocery delivery workers handsomely to buying more from local stores online. But there are conflicting views on whether the best way to be an ethical shopper at this moment means not shopping from Amazon at all, especially given its position as one of the biggest hirers during a severe labor market crunch.

"If people choose to work at Amazon, we should respect their decisions," said Peter Singer, an ethics professor at Princeton University and author of "The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically."

The US Department of Labor announced Thursday that about 6.6 million people filed for unemployment benefits in the last week alone, bringing the number of lost jobs during the pandemic to nearly 17 million. Singer highlighted how delivery services are one of the few areas in which businesses are hiring.

But Christian Smalls, the former Amazon employee who helped organize a protest calling for senior warehouse officials to close the Staten Island, New York, facility for deep cleaning after multiple cases of the virus emerged there, advises otherwise. (The company later fired Smalls, saying he did not stay in quarantine after exposure to someone who tested positive.)

"If you want to practice real social distancing, stop pressing the buy button," Smalls told CNN Business. "You'll be saving lives. I understand that people need groceries and certain items, depending where you live, are limited. But people are buying things they don't need and it's putting workers' health at risk."

Although the issue is complex, shoppers who decide to continue using Amazon, or any online delivery platform, can keep a few best practices in mind.

The info is here.

Why We Already Have False Memories of the COVID-19 Crisis

Julia Shaw
Psychology Today
Originally posted 10 April 20

Here is an excerpt:

How this pandemic is giving you false memories

What this means is that if you already have false memories of what you have done, heard, or seen during the COVID-19 pandemic, you probably can't spot them. Both your memory of the news and your memories of the emotional events happening in your life may be being changed or contaminated.

A few things make us more susceptible to false memories right now:
  • Source confusion. What we learn from multiple sources about the same topic can very quickly become confused. This can lead to false memories based on source confusion, which is when we misattribute where we learned something. For example, in reality it was your weird uncle who said that thing, but your brain may incorrectly be sure it was the BBC. That's a small false memory, but at a time like this, it can have profound effects. Especially when that thing is a dangerous misconception.
  • Fake news. Some of the content we see online will be false or misleading. Reading headlines or posts in a rush, we may not realise that a story is from an unreliable source. Fake news is often specifically created to be memorable, and to influence our thoughts and behaviour. According to research, people are "most susceptible to forming false memories for fake news that aligns with their beliefs" (3).
  • Co-witness contamination. We are all witnesses of this world event, witnesses who are talking to each other all the time. If the COVID-19 pandemic were a crime scene, this would be really bad news. Witnesses can influence one another, and their memories tend to blend - these are called co-witness effects. As found repeatedly in research, "Witnesses who discuss an event with others often incorporate misinformation encountered during the discussion into their memory of the event" (4).
  • Sameness. Every day we hear unprecedented news or horrific medical stories. But after weeks or months of the same type of information, with a reduction in the amount of new and exciting things happening elsewhere, it gets difficult to separate this long stream of information into meaningful bits. Brains aren't made for sameness; they want separation and novelty. This means that whether it's another "how are you" conversation, another statistic, or another day at home...our memories are blending. This makes it easy to get memories mixed up, even important ones.

The info is here.

Tuesday, April 14, 2020

Don't just look for the helpers. Be a helper

Elissa Strauss
cnn.com
Originally posted 3 April 20

Here is an excerpt:

One of the easiest ways to teach your children to be helpers is by doing more helping yourself.

"Modeling, also called observational learning, is one of the most underestimated and poorly used tools by parents," said Alan Kazdin, professor of psychology and child psychiatry at Yale University.

Kazdin said modeling generosity can begin by simply appreciating generosity in others. Hear about something nice someone did for someone else? Point it out.

When parents do it themselves, they should make a habit of telling their children about it. Though, importantly, do not boast about it. "Be instructive, kind and gentle, rather than righteous," Kazdin said. (This should not be an opportunity for parents to toot their own horns.)

The amazing thing about modeling, Kazdin explained, is how it can teach our children skills without them ever actually doing anything. We can change who they are just by being the people we want them to become.

Kazdin said the brain's mirror networks — the marvelous trick of the mind that allows us to feel as though we are doing what we see others doing — are probably responsible. Our kids can experience the arc of giving — the initial flush of generosity, the execution of the act and the helper's high — through us.

The info is here.


New Data Rules Could Empower Patients but Undermine Their Privacy

Natasha Singer
The New York Times
Originally posted 9 March 20

Here is an excerpt:

The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.

Giving people access to their medical records via mobile apps is a major milestone for patient rights, even as it may heighten risks to patient privacy.

Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.

Although Americans have had the legal right to obtain a copy of their personal health information for two decades, many people face obstacles in getting that data from providers.

Some physicians still require patients to pick up computer disks — or even photocopies — of their records in person. Some medical centers use online portals that offer access to basic health data, like immunizations, but often do not include information like doctors’ consultation notes that might help patients better understand their conditions and track their progress.

The new rules are intended to shift that power imbalance toward the patient.

The info is here.

Monday, April 13, 2020

Lawmakers Push Again for Info on Google Collecting Patient Data

Rob Copeland
Wall Street Journal
Originally published 3 March 20

A bipartisan trio of U.S. senators pushed again for answers on Google’s controversial “Project Nightingale,” saying the search giant evaded requests for details on its far-reaching data tie-up with health giant Ascension.

The senators, in a letter Monday to St. Louis-based Ascension, said they were put off by the lack of substantive disclosure around the effort.

Project Nightingale was revealed in November in a series of Wall Street Journal articles that described Google’s then-secret engagement to collect and crunch the personal health information of millions of patients across 21 states.

Sens. Richard Blumenthal (D., Conn.), Bill Cassidy (R., La.), and Elizabeth Warren (D., Mass.) subsequently wrote to the Alphabet Inc. unit seeking basic information about the program, including the number of patients involved, the data shared and who at Google had access.

The head of Google Health, Dr. David Feinberg, responded with a letter in December that largely stuck to generalities, according to correspondence reviewed by the Journal.

(cut)

Ascension earlier this year fired an employee who had reached out to media, lawmakers and regulators with concerns about Project Nightingale, a person familiar with the matter said. 

The employee, who described himself as a whistleblower, was told by Ascension higher-ups that he had shared information about the initiative that was intended to be secret, the person said.

Nick Ragone, a spokesman for Ascension—one of the U.S.’s largest health-care systems with 2,600 hospitals, doctors’ offices and other facilities—declined to say why the employee in question was fired. 

Which Legal Approaches Help Limit Harms to Patients From Clinicians’ Conscience-Based Refusals?

R. Kogan, K. Kraschel, & C. Haupt
AMA J Ethics. 2020;22(3):E209-216.
doi: 10.1001/amajethics.2020.209.

Abstract

This article canvasses laws protecting clinicians’ conscience and focuses on dilemmas that occur when a clinician refuses to perform a procedure consistent with the standard of care. In particular, the article focuses on patients’ experience with a conscientiously objecting clinician at a secular institution, where patients are least likely to expect conscience-based care restrictions. After reviewing existing laws that protect clinicians’ conscience, the article discusses limited legal remedies available to patients.

Potential Sites of Conflict

Clinicians who object to providing care on the basis of “conscience” have never been more robustly protected than today by state legislatures and federal law. Although US law as well as professional ethics allows clinicians to deviate from professional norms and standards when their religious or moral beliefs conflict with a requested service,1 the scope of legal remedies for patients harmed by these objections has shrunk as federal and state law has effectively insulated objecting clinicians from liability. This article outlines laws protecting clinician conscience and identifies questions that arise when a clinician refuses to perform a procedure consistent with the medical profession’s standard of care. We focus on patients seeking care at secular institutions where patients are least likely to have notice that care they receive could be restricted based upon an individual clinician’s refusal. As a result, patients may unknowingly receive substandard care from objecting physicians and even be harmed by their refusals. However, the legal remedies available to patients adversely affected by refusals are limited. We first discuss federal and state law governing refusals based on clinician conscience and then examine the remedies available to patients who suffer harm as a result of a physician’s refusal.


Sunday, April 12, 2020

On the Willingness to Report and the Consequences of Reporting Research Misconduct: The Role of Power Relations.

Horbach, S.P.J.M., et al.
Sci Eng Ethics (2020).
https://doi.org/10.1007/s11948-020-00202-8

Abstract

While attention to research integrity has been growing over the past decades, the processes of signalling and denouncing cases of research misconduct remain largely unstudied. In this article, we develop a theoretically and empirically informed understanding of the causes and consequences of reporting research misconduct in terms of power relations. We study the reporting process based on a multinational survey at eight European universities (N = 1126). Using qualitative data that witnesses of research misconduct or of questionable research practices provided, we aim to examine actors’ rationales for reporting and not reporting misconduct, how they report it and the perceived consequences of reporting. In particular we study how research seniority, the temporality of work appointments, and gender could impact the likelihood of cases being reported and of reporting leading to constructive organisational changes. Our findings suggest that these aspects of power relations play a role in the reporting of research misconduct. Our analysis contributes to a better understanding of research misconduct in an academic context. Specifically, we elucidate the processes that affect researchers’ ability and willingness to report research misconduct, and the likelihood of universities taking action. Based on our findings, we outline specific propositions that future research can test as well as provide recommendations for policy improvement.

From the Conclusion:

We also find that contested forms of misconduct (e.g. authorship, cherry picking of data and fabrication of data) are less likely to be reported than more clear-cut instances of misconduct (e.g. plagiarism, text recycling and falsification of data). The respondents mention that minor misbehaviour is not considered worth reporting, or express doubts about the effectiveness of reporting a case when the witnessed behaviour does not explicitly transgress norms, such as with many of the QRPs. Concern about reporting’s negative consequences, such as harm to career opportunities or organisational reputations, is always taken into consideration.

Secondly, we have theorised the relationship between power differences and researchers’ willingness to report—in particular the role of seniority, work appointments and gender. We have derived a list of seven propositions that we believe warrant testing and refinement in future studies using a larger sample to help with further theory building about power differences and research misconduct.


Saturday, April 11, 2020

The Tyranny of Time: How Long Does Effective Therapy Really Take?

Jonathan Shedler & Enrico Gnaulati
Psychotherapy Networker
Originally posted March/April 2020

Here is an excerpt:

Like the Consumer Reports study, this study also found a dose–response relation between therapy sessions and improvement. In this case, the longer therapy continued, the more clients achieved clinically significant change. So just how much therapy did it take? It took 21 sessions, or about six months of weekly therapy, for 50 percent of clients to see clinically significant change. It took more than 40 sessions, almost a year of weekly therapy, for 75 percent to see clinically significant change.

Information from the surveys of clients and therapists turned out to be pretty spot on. Three independent data sources converge on similar time frames. Every client is different, and no one can predict how much therapy is enough for a specific person, but on average, clinically meaningful change begins around the six-month mark and grows from there. And while some people will get what they need with less therapy, others will need a good deal more.

This is consistent with what clinical theorists have been telling us for the better part of a century. It should come as no surprise. Nothing of deep and lasting value is cheap or easy, and changing oneself and the course of one’s life may be most valuable of all.

Consider what it takes to master any new and complex skill, say learning a language, playing a musical instrument, learning to ski, or becoming adept at carpentry. With six months of practice, you might attain beginner- or novice-level proficiency, maybe. If someone promised to make you an expert in six months, you’d suspect they were selling snake oil. Meaningful personal development takes time and effort. Why would psychotherapy be any different?


Friday, April 10, 2020

Better the Two Devils You Know, Than the One You Don’t: Predictability Influences Moral Judgment

A. Walker, M. Turpin, & others
PsyArXiv Preprints
Updated 6 April 2020

Abstract

Across four studies (N = 1,806 US residents), we demonstrate the role perceptions of predictability play in judgments of moral character, finding that less predictable agents were also judged as less moral. Participants judged agents performing an immoral action (e.g., assault) for an unintelligible reason as less predictable and less moral than agents performing the same immoral action for a well-understood immoral reason (Studies 1-3). Additionally, agents performing an action in an unusual way were judged as less predictable and less moral than those performing the same action in a common manner (Study 4). These results challenge monist theories of moral psychology, which reduce morality to a single dimension (e.g., harm), as well as pluralist accounts that fail to consider the role predictability plays in moral judgments. We propose that predictability influences judgments of moral character for its ultimate role in facilitating cooperation and discuss how these findings may be accommodated by theories of morality-as-cooperation.

From the General Discussion

Supporting the idea that judgments of predictability guide judgments of moral character, we show that people judge agents they perceive as less predictable to be less moral. Those signalling unpredictability with their actions, either by acting without an intelligible motive (Studies 1-3) or by performing an immoral act in an unusual manner (Study 4), are consistently viewed as possessing an especially poor moral character.

Despite its importance for cooperation, and therefore moral judgments (Curry, 2016; Curry et al., 2019; Greene, 2013; Haidt, 2012; Rai & Fiske, 2011; Tomasello & Vaish, 2013), dominant theories of moral psychology have not explicitly considered the role predictability plays in judgments of moral character. Here we presented novel scenarios for which many popular theoretical frameworks fail to accurately capture participants’ moral impressions.
