Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, November 14, 2018

Moral resilience: how to navigate ethical complexity in clinical practice

Cynda Rushton
Oxford University Press
Originally posted October 12, 2018

Clinicians are constantly confronted with ethical questions. High-profile best-interest cases involving healthcare workers are on the rise, but decisions about allocating the clinician’s time and skills, or scarce resources such as organs and medication, are everyday occurrences. The increasing pressure to “do more with less” can take its toll.

Dr Cynda Rushton is a professor of clinical ethics, and a proponent of ‘moral resilience’ as a pathway through which clinicians can lessen their experience of moral distress, and navigate the contentious issues they may face with a greater sense of integrity. In the video series below, she provides the guiding principles of moral resilience, and explores how they can be put into practice.

The videos are here.

Keeping Human Stories at the Center of Health Care

M. Bridget Duffy
Harvard Business Review
Originally published October 8, 2018

Here is an excerpt:

A mentor told me early in my career that only 20% of healing involves the high-tech stuff. The remaining 80%, he said, is about the relationships we build with patients, the physical environments we create, and the resources we provide that enable patients to tap into whatever they need for spiritual sustenance. The longer I work in health care, the more I realize just how right he was.

How do we get back to the 80-20 rule? By placing the well-being of patients and care teams at the top of the list for every initiative we undertake and every technology we introduce. Rather than just introducing technology with no thought as to its impact on clinicians — as happened with many rollouts of electronic medical records (EMRs) — we need to establish a way to quantifiably measure whether a new technology actually improves a clinician’s workday and ability to deliver care or simply creates hassles and inefficiency. Let’s develop an up-front “technology ROI” that measures workflow impact, inefficiency, hassle and impact on physician and nurse well-being.

The National Taskforce for Humanity in Healthcare, of which I am a founding member, is piloting a system of metrics for well-being developed by J. Bryan Sexton of Duke University Medical Center. Instead of measuring burnout or how broken health care people are, Dr. Sexton’s metrics focus on emotional thriving and emotional resilience. (The former is gauged by how strongly people agree or disagree with these statements: “I have a chance to use my strengths every day at work,” “I feel like I am thriving at my job,” “I feel like I am making a meaningful difference at my job,” and “I often have something that I am very much looking forward to at my job.”)

The info is here.

Tuesday, November 13, 2018

Mozilla’s ambitious plan to teach coders not to be evil

Katherine Schwab
Fast Company
Originally published October 10, 2018

Here is an excerpt:

There’s already a burgeoning movement to integrate ethics into the computer science classroom. Harvard and MIT have launched a joint class on the ethics of AI. UT Austin has an ethics class for computer science majors that it plans to eventually make a requirement. Stanford similarly is developing an ethics class within its computer science department. But many of these are one-off initiatives, and a national challenge of this type will provide the resources and incentive for more universities to think about these questions–and theoretically help the best ideas scale across the country.

Still, Baker says she’s sometimes cynical about how much impact ethics classes will have without broader social change. “There’s a lot of power and institutional pressure and wealth” in making decisions that are good for business, but might be bad for humanity, Baker says. “The fact you had some classes in ethics isn’t going to overcome all that and make things perfect. People have many motivations.”

Even so, teaching young people how to think about tech’s implications with nuance could help to combat some of those other motivations–primarily, money. The conversation shouldn’t be as binary as code; it should acknowledge typical ways data is used and help young technologists talk and think about the difference between providing value and being invasive.

The info is here.

Delusions and Three Myths of Irrational Belief

Bortolotti, L. (2018). Delusions and Three Myths of Irrational Belief.
In: Bortolotti, L. (ed.) Delusions in Context. Palgrave Macmillan, Cham.

Abstract

This chapter addresses the contribution that the delusion literature has made to the philosophy of belief. Three conclusions will be drawn: (1) a belief does not need to be epistemically rational to be used in the interpretation of behaviour; (2) a belief does not need to be epistemically rational to have significant psychological or epistemic benefits; (3) beliefs exhibiting the features of epistemic irrationality exemplified by delusions are not infrequent, and they are not an exception in a largely rational belief system. What we learn from the delusion literature is that there are complex relationships between rationality and interpretation, rationality and success, and rationality and knowledge.

The chapter is here.

Here is a portion of the Conclusion:

Second, it is not obvious that epistemically irrational beliefs should be corrected, challenged, or regarded as a glitch in an otherwise rational belief system. The whole attitude towards such beliefs should change. We all have many epistemically irrational beliefs, and they are not always a sign that we lack credibility or we are mentally unwell. Rather, they are predictable features of human cognition (Puddifoot and Bortolotti, 2018). We are not unbiased in the way we weigh up evidence and we tend to be conservative once we have adopted a belief, making it hard for new contrary evidence to unsettle our existing convictions. Some delusions are just a vivid illustration of a general tendency that is widely shared and hard to counteract. Delusions, just like more common epistemically irrational beliefs, may be a significant obstacle to the achievements of our goals and may cause a rift between our way of seeing the world and other people’s way. That is why it is important to develop a critical attitude towards their content.

Monday, November 12, 2018

7 Ways Marketers Can Use Corporate Morality to Prepare for Future Data Privacy Laws

Patrick Hogan
Adweek.com
Originally posted October 10, 2018

Here is an excerpt:

Many organizations have already made responsible adjustments in how they communicate with users about data collection and use and have become compliant to support recent laws. However, compliance does not always equal responsibility, and even though companies do require consent and provide information as required, linking to the terms of use, clicking a checkbox or double opting-in still may not be enough to stay ahead or protect consumers.

The best way to reduce the impact of the potential legislation is to take proactive steps now that set a new standard of responsibility in data use for your organization. Below are some measurable ways marketers can lead the way for the changing industry and create a foundational perception shift away from data and back to the acknowledgment of putting other humans first.

Create an action plan for complete data control and transparency

Set standards and protocols for your internal teams to determine how you are going to communicate with each other and your clients about data privacy, thus creating a path for all employees to follow and abide by moving forward.

Map data in your organization from receipt to storage to expulsion

Accountability is key. As a business, you should be able to know and speak to what is being done with the data that you are collecting throughout each stage of the process.

The info is here.

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Sunday, November 11, 2018

Nine risk management lessons for practitioners.

Taube, Daniel O., Scroppo, Joe, & Zelechoski, Amanda D.
Practice Innovations, October 4, 2018

Abstract

Risk management is an essential skill for professionals and is important throughout the course of their careers. Effective risk management blends a utilitarian focus on the potential costs and benefits of particular courses of action, with a solid foundation in ethical principles. Awareness of particularly risk-laden circumstances and practical strategies can promote safer and more effective practice. This article reviews nine situations and their associated lessons, illustrated by case examples. These situations emerged from our experience as risk management consultants who have listened to and assisted many practitioners in addressing the challenges they face on a day-to-day basis. The lessons include a focus on obtaining consent, setting boundaries, flexibility, attention to clinician affect, differentiating the clinician’s own values and needs from those of the client, awareness of the limits of competence, maintaining adequate legal knowledge, keeping good records, and routine consultation. We highlight issues and approaches to consider in these types of cases that minimize risks of adverse outcomes and enhance good practice.

The info is here.

Here is a portion of the article:

Being aware of basic legal parameters can help clinicians to avoid making errors in this complex arena. Yet clinicians are not usually lawyers and tend to have only limited legal knowledge. This gives rise to a risk of assuming more mastery than one may have.

Indeed, research suggests that a range of professionals, including psychotherapists, overestimate their capabilities and competencies, even in areas in which they have received substantial training (Creed, Wolk, Feinberg, Evans, & Beck, 2016; Lipsett, Harris, & Downing, 2011; Mathieson, Barnfield, & Beaumont, 2009; Walfish, McAlister, O’Donnell, & Lambert, 2012).

Saturday, November 10, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction

Maria Panagioti, Keith Geraghty, Judith Johnson
JAMA Intern Med. 2018;178(10):1317-1330.
doi:10.1001/jamainternmed.2018.3713

Abstract

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Study Selection  Quantitative observational studies.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cohen Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cohen Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.

Friday, November 9, 2018

Believing without evidence is always morally wrong

Francisco Mejia Uribe
aeon.co
Originally posted November 5, 2018

Here are two excerpts:

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

(cut)

Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news peddlers, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge. In Clifford’s time, the way in which our beliefs were woven into the ‘precious deposit’ of common knowledge was primarily through speech and writing. Because of this capacity to communicate, ‘our words, our phrases, our forms and processes and modes of thought’ become ‘common property’. Subverting this ‘heirloom’, as he called it, by adding false beliefs is immoral because everyone’s lives ultimately rely on this vital, shared resource.

The info is here.