Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, February 24, 2020

Physician Burnout Is Widespread, Especially Among Those in Midcareer

Brianna Abbott
The Wall Street Journal
Originally posted 15 Jan 20

Burnout is particularly pervasive among health-care workers such as physicians and nurses, researchers say. The risk of burnout is significantly greater for physicians than for U.S. working adults overall, and physicians also report being less satisfied with their work-life balance, according to a 2019 study published in Mayo Clinic Proceedings.

Overall, 42% of the physicians in the new survey, across 29 specialties, reported feeling some sense of burnout, down slightly from 46% in 2015.

The report, published on Wednesday by medical-information platform Medscape, breaks down the generational differences in burnout and how doctors cope with the symptoms that are widespread throughout the profession.

“There are a lot more similarities than differences, and what that highlights is that burnout in medicine right now is really an entire-profession problem,” said Colin West, a professor of medicine at the Mayo Clinic who researches physician well-being. “There’s really no age group, career stage, gender or specialty that’s immune from these issues.”

In recent years, hospitals, health systems and advocacy groups have tried to curb the problem by starting wellness programs, hiring chief wellness officers or attempting to reduce administrative tasks for nurses and physicians.

Still, high rates of burnout persist among the medical community, from medical-school students to seasoned professionals, and more than two-thirds of all physicians surveyed in the Medscape report said that burnout had an impact on their personal relationships.

Nearly one in five physicians also reported that they are depressed, with the highest rate, 18%, reported by Gen Xers.

The info is here.

An emotionally intelligent AI could support astronauts on a trip to Mars

Neel Patel
MIT Technology Review
Originally published 14 Jan 20

Here are two excerpts:

Keeping track of a crew’s mental and emotional health isn’t really a problem for NASA today. Astronauts on the ISS regularly talk to psychiatrists on the ground. NASA ensures that doctors are readily available to address any serious signs of distress. But much of this system is possible only because the astronauts are in low Earth orbit, easily accessible to mission control. In deep space, you would have to deal with lags in communication that could stretch for hours. Smaller agencies or private companies might not have mental health experts on call to deal with emergencies. An onboard emotional AI might be better equipped to spot problems and triage them as soon as they come up.

(cut)

Akin’s biggest obstacles are those that plague the entire field of emotional AI. Lisa Feldman Barrett, a psychologist at Northeastern University who specializes in human emotion, has previously pointed out that the way most tech firms train AI to recognize human emotions is deeply flawed. “Systems don’t recognize psychological meaning,” she says. “They recognize physical movements and changes, and they infer psychological meaning.” Those are certainly not the same thing.

But a spacecraft, it turns out, might actually be an ideal environment for training and deploying an emotionally intelligent AI. Since the technology would be interacting with just the small group of people onboard, says Barrett, it would be able to learn each individual’s “vocabulary of facial expressions” and how they manifest in the face, body, and voice.
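
Barrett’s description suggests a concrete shape for such a system. Below is a minimal sketch, in Python, of the per-person approach: rather than one population-level emotion model, the software keeps a separate profile for each crew member and classifies new observations only against that individual’s own history. Everything in it is hypothetical, including the toy feature vectors, which stand in for whatever a real system would extract from face, body, and voice.

```python
# Minimal sketch of per-crew-member emotion calibration (hypothetical):
# each person gets their own "vocabulary" of labeled observations, and
# classification never mixes one crew member's data with another's.
from collections import defaultdict
import numpy as np

class PerPersonEmotionModel:
    """Nearest-centroid classifier trained separately for each crew member."""

    def __init__(self):
        # person -> emotion label -> list of feature vectors seen for that label
        self.samples = defaultdict(lambda: defaultdict(list))

    def observe(self, person: str, features: np.ndarray, label: str) -> None:
        """Record one labeled observation, e.g. from a self-report check-in."""
        self.samples[person][label].append(features)

    def classify(self, person: str, features: np.ndarray) -> str:
        """Label a new observation using only this person's history."""
        centroids = {
            label: np.mean(vecs, axis=0)
            for label, vecs in self.samples[person].items()
        }
        return min(centroids, key=lambda lab: np.linalg.norm(features - centroids[lab]))

model = PerPersonEmotionModel()
model.observe("crew_member_1", np.array([0.9, 0.1]), "stressed")
model.observe("crew_member_1", np.array([0.1, 0.8]), "calm")
print(model.classify("crew_member_1", np.array([0.8, 0.2])))  # -> "stressed"
```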

The info is here.

Sunday, February 23, 2020

Burnout as an ethical issue in psychotherapy.

Simionato, G., Simpson, S., & Reid, C.
Psychotherapy, 56(4), 470–482.

Abstract

Recent studies highlight a range of factors that place psychotherapists at risk of burnout. The aim of this study was to investigate the ethics issues linked to burnout among psychotherapists and to describe potentially effective ways of reducing vulnerability and preventing collateral damage. A purposive critical review of the literature was conducted to inform a narrative analysis. Differing burnout presentations elicit a wide range of ethics issues. High rates of burnout in the sector suggest systemic factors and the need for an ethics review of standard workplace practice. Burnout costs employers and taxpayers billions of dollars annually in heightened presenteeism and absenteeism. At a personal level, burnout has been linked to poorer physical and mental health outcomes for psychotherapists. Burnout has also been shown to interfere with clinical effectiveness and even contribute to misconduct. Hence, the ethical impact of burnout extends to our duty of care to clients and responsibilities to employers. A range of occupational and personal variables have been identified as vulnerability factors. A new 5-P model of prevention is proposed, which combines systemic and individually tailored responses as a means of offering the greatest potential for effective prevention, identification, and remediation. In addition to the significant economic impact and the impact on personal well-being, burnout in psychotherapists has the potential to directly and indirectly affect client care and standards of professional practice. Attending to the ethical risks associated with burnout is a priority for the profession, for service managers, and for each individual psychotherapist.

From the Conclusion:

Burnout is a common feature of unintentional misconduct among psychotherapists, often at the expense of client well-being, therapeutic progress, and successful client outcomes. Clinicians working in spite of burnout also incur personal and economic costs that compromise the principles of competence and beneficence outlined in ethical guidelines. This article has focused on a communitarian approach to identifying, understanding, and responding to the signs, symptoms, and risk factors in an attempt to harness ethical practice and foster successful careers in psychotherapy. The 5-P strength-based model illuminates the positive potential of workplaces that support well-being and prioritize ethical practice through providing an individualized responsiveness to the training, professional development, and support needs of staff. Further, in contrast to the majority of the literature that explores organizational factors leading to burnout and ethical missteps, the 5-P model also considers the personal characteristics that may contribute to burnout and the personal action that psychotherapists can take to avoid burnout and unintentional misconduct.

The info is here.

Saturday, February 22, 2020

Hospitals Give Tech Giants Access to Detailed Medical Records

Melanie Evans
The Wall Street Journal
Originally published 20 Jan 20

Here is an excerpt:

Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.

The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.

Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.

“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.

Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.

(cut)

Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.

The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.

The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.

Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.

The info is here.

Friday, February 21, 2020

Friends or foes: Is empathy necessary for moral behavior?

Jean Decety and Jason M. Cowell
Perspect Psychol Sci. 2014 Sep; 9(4): 525–537.
doi: 10.1177/1745691614545130

Abstract

The past decade has witnessed a flurry of empirical and theoretical research on morality and empathy, as well as increased interest and usage in the media and the public arena. At times, in both popular discourse and academia, morality and empathy are used interchangeably, and quite often the latter is considered to play a foundational role for the former. In this article, we argue that, while there is a relationship between morality and empathy, it is not as straightforward as it appears at first glance. Moreover, it is critical to distinguish between the different facets of empathy (emotional sharing, empathic concern, and perspective taking), as each uniquely influences moral cognition and predicts differential outcomes in moral behavior. Empirical evidence and theories from evolutionary biology, developmental, behavioral, and affective and social neuroscience are comprehensively integrated in support of this argument. The wealth of findings illustrates a complex and equivocal relationship between morality and empathy. The key to understanding such relations is to be more precise about the concepts being used, and perhaps to abandon the muddy concept of empathy.

From the Conclusion:

To wrap up on a provocative note, it may be advantageous for the science of morality, in the future, to refrain from using the catch-all term of empathy, which applies to a myriad of processes and phenomena and, as a result, yields confusion in both understanding and predictive ability. In both academic and applied domains such as medicine, ethics, law and policy, empathy has become an enticing but muddy notion, potentially leading to misinterpretation. If ancient Greek philosophy has taught us anything, it is that when a concept is assigned so many meanings, it is at risk of losing its function.

The article is here.

Why Google thinks we need to regulate AI

Sundar Pichai
ft.com
Originally posted 19 Jan 20

Here are two excerpts:

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

(cut)

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
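
Pichai does not describe what Google’s fairness testing looks like internally, but one common form such a test takes is easy to sketch: compare a model’s positive-decision rates across demographic groups and flag large gaps, a demographic-parity check. The Python sketch below is a generic illustration of that idea, not a description of Google’s tooling.

```python
# Hedged illustration of one common fairness test: the demographic parity
# gap, i.e., the spread in positive-decision rates across groups.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return (largest gap in positive-decision rate, per-group rates).

    decisions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    decisions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates, gap)  # {'a': 0.75, 'b': 0.25} 0.5 -- a gap this large gets flagged
```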

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.

The info is here.

Thursday, February 20, 2020

Harvey Weinstein’s ‘false memory’ defense is not backed by science

Anne DePrince & Joan Cook
The Conversation
Originally posted 10 Feb 20

Here is an excerpt:

In 1996, pioneering psychologist Jennifer Freyd introduced the concept of betrayal trauma. She made plain how forgetting, not thinking about, and even misremembering an assault may be necessary and adaptive for some survivors. She argued that the way in which traumatic events, like sexual violence, are processed and remembered depends on how much betrayal there is. Betrayal happens when the victim depends on the abuser, such as a parent, spouse or boss. The victim has to adapt day-to-day because they are (or feel) stuck in that relationship. One way that victims can survive is by thinking or remembering less about the abuse or telling themselves it wasn’t abuse.

Since 1996, compelling scientific evidence has shown a strong relationship between amnesia and victims’ dependence on abusers. Psychologists and other scientists have also learned much about the nature of memory, including memory for traumas like sexual assault. What gets into memory and later remembered is affected by a host of factors, including characteristics of the person and the situation. For example, some individuals dissociate during or after traumatic events. Dissociation offers a way to escape the inescapable, such that people feel as if they have detached from their bodies or the environment. It is not surprising to us that dissociation is linked with incomplete memories.

Memory can also be affected by what other people do and say. For example, researchers recently looked at what happened when they told participants not to think about some words that they had just studied. Following that instruction, those who had histories of trauma suppressed more memories than their peers did.

The info is here.

Sharing Patient Data Without Exploiting Patients

McCoy MS, Joffe S, Emanuel EJ.
JAMA. Published online January 16, 2020.
doi:10.1001/jama.2019.22354

Here is an excerpt:

The Risks of Data Sharing

When health systems share patient data, the primary risk to patients is the exposure of their personal health information, which can result in a range of harms including embarrassment, stigma, and discrimination. Such exposure is most obvious when health systems fail to remove identifying information before sharing data, as is alleged in the lawsuit against Google and the University of Chicago. But even when shared data are fully deidentified in accordance with the requirements of the Health Insurance Portability and Accountability Act, reidentification is possible, especially when patient data are linked with other data sets. Indeed, even new data privacy laws such as Europe's General Data Protection Regulation and California's Consumer Privacy Act do not eliminate reidentification risk.
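
The linkage risk described above is straightforward to demonstrate. In the toy Python sketch below (all data invented), “deidentified” hospital records that retain ZIP code, birth year, and sex are joined to a named outside data set on those quasi-identifiers, recovering name-diagnosis pairs; real reidentification attacks work the same way at much larger scale.

```python
# Toy linkage attack (invented data): join "deidentified" records to a
# named reference data set on shared quasi-identifiers.
deidentified_records = [
    {"zip": "60637", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "60615", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]
outside_dataset = [  # e.g., a voter roll or purchased marketing list
    {"name": "Jane Roe", "zip": "60637", "birth_year": 1984, "sex": "F"},
    {"name": "John Doe", "zip": "60615", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(records, reference):
    """Return (name, diagnosis) pairs where quasi-identifiers match exactly."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        for ref in reference:
            if tuple(ref[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((ref["name"], rec["diagnosis"]))
    return matches

print(reidentify(deidentified_records, outside_dataset))
# [('Jane Roe', 'diabetes'), ('John Doe', 'asthma')]
```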

Companies that acquire patient data also accept risk by investing in research and development that may not result in marketable products. This risk is less ethically concerning, however, than that borne by patients. While companies usually can abandon unpromising ventures, patients’ lack of control over data-sharing arrangements makes them vulnerable to exploitation. Patients lack control, first, because they may have no option other than to seek care in a health system that plans to share their data. Second, even if patients are able to authorize sharing of their data, they are rarely given the information and opportunity to ask questions needed to give meaningful informed consent to future uses of their data.

Thus, for the foreseeable future, data sharing will entail ethically concerning risks to patients whose data are shared. But whether these exchanges are exploitative depends on how much benefit patients receive from data sharing.

The info is here.

Wednesday, February 19, 2020

American Psychological Association Calls for Immediate Halt to Sharing Immigrant Youths' Confidential Psychotherapy Notes with ICE

American Psychological Association
Press Release
Released 17 Feb 20

The American Psychological Association expressed shock and outrage that the federal Office of Refugee Resettlement has been sharing confidential psychotherapy notes with U.S. Immigration and Customs Enforcement to deny asylum to some immigrant youths.

“ORR’s sharing of confidential therapy notes of traumatized children destroys the bond of trust between patient and therapist that is vital to helping the patient,” said APA President Sandra L. Shullman, PhD. “We call on ORR to stop this practice immediately and on the Department of Health and Human Services and Congress to investigate its prevalence. We also call on ICE to release any immigrants who have had their asylum requests denied as a result.”

APA was reacting to a report in The Washington Post focused largely on the case of then-17-year-old Kevin Euceda, an asylum-seeker from Honduras whose request for asylum was granted by a judge, only to have it overturned when lawyers from ICE revealed information he had given in confidence to a therapist at a U.S. government shelter. According to the article, other unaccompanied minors have been similarly detained as a result of ICE’s use of confidential psychotherapy notes. These situations have also been confirmed by congressional testimony since 2018.

Unaccompanied minors who are detained in U.S. shelters are required to undergo therapy, ostensibly to help them deal with trauma and other issues arising from leaving their home countries. According to the Post, ORR entered into a formal memorandum of agreement with ICE in April 2018 to share details about children in its care. The then-head of ORR testified before Congress that the agency would be asking its therapists to “develop additional information” about children during “weekly counseling sessions where they may self-disclose previous gang or criminal activity to their assigned clinician,” the newspaper reported. The agency added two requirements to its public handbook: that arriving children be informed that, while it was essential to be honest with staff, self-disclosures could affect their release; and that, if a minor mentioned anything having to do with gangs or drug dealing, therapists would file a report within four hours to be passed to ICE within one day, the Post said.

"For this administration to weaponize these therapy sessions by ordering that the psychotherapy notes be passed to ICE is appalling,” Shullman added. “These children have already experienced some unimaginable traumas. Plus, these are scared minors who may not understand that speaking truthfully to therapists about gangs and drugs – possibly the reasons they left home – would be used against them.”