Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, August 7, 2019

First do no harm: the impossible oath

Kamran Abbasi
BMJ 2019; 366
doi: https://doi.org/10.1136/bmj.l4734

Here is the beginning:

Discussions about patient safety describe healthcare as an industry. If that’s the case then what is healthcare’s business? What does it manufacture? Health and wellbeing? Possibly. But we know for certain that healthcare manufactures harm. Look at the data from our new research paper on the prevalence, severity, and nature of preventable harm (doi:10.1136/bmj.l4185). Maria Panagioti and colleagues find that the prevalence of overall harm, preventable and non-preventable, is 12% across medical care settings. Around half of this is preventable.

These data make something of a mockery of our principal professional oath to first do no harm. Working in clinical practice, we do harm that we cannot prevent or avoid, such as by appropriately prescribing a drug that causes an adverse drug reaction. As our experience, evidence, and knowledge improve, what isn’t preventable today may well be preventable in the future.

The argument, then, isn’t over whether healthcare causes harm but about the exact estimates of harm and how much of it is preventable. The answer that Panagioti and colleagues deliver from their systematic review of the available evidence is the best we have at the moment, though it isn’t perfect. The definitions of preventable harm differ. Existing studies are heterogeneous and focused more on overall rather than preventable harm. The standard method is the retrospective case record review. The need, say the authors, is for better research in all fields and more research on preventable harms in primary care, psychiatry, and developing countries, and among children and older adults.

Veil-of-Ignorance Reasoning Favors the Greater Good

Karen Huang, Joshua D. Greene, & Max Bazerman
PsyArXiv
Originally posted July 2, 2019

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

The research is here.

Tuesday, August 6, 2019

Dante, Trump and the moral cowardice of the G.O.P.

Charlie Sykes
www.americamagazine.com
Originally published July 21, 2019

One of John F. Kennedy’s favorite quotes was something he thought came from Dante: “The hottest places in Hell are reserved for those who in time of moral crisis preserve their neutrality.”

As it turns out, the quote is apocryphal. But what Dante did write was far better, and it came vividly to mind last week as Republicans failed to take a stand after President Trump’s racist tweets and the “Send her back” chants at a Trump rally in North Carolina, both directed at Representative Ilhan Omar of Minnesota, who immigrated here from Somalia.

In Dante’s Inferno, the moral cowards are not granted admission to Hell; they are consigned to the vestibule, where they are doomed to follow a rushing banner that is blown about by the wind.

(cut)

Despite some feeble attempts at rationalization, there was clarity to the president’s language and his larger intent. Mr. Trump was not merely using racist tropes; he was calling forth something dark and dangerous.

The president did not invent or create the racism, xenophobia and ugliness on display last week; they were all pre-existing conditions. But simply because something is latent does not mean it will metastasize into something malignant or fatal. Just because there is a hot glowing ember does not mean that it will explode into a raging conflagration.

The info is here.

Ethics and automation: What to do when workers are displaced

Tracy Mayor
MIT School of Management
Originally published July 8, 2019

As companies embrace automation and artificial intelligence, some jobs will be created or enhanced, but many more are likely to go away. What obligation do organizations have to displaced workers in such situations? Is there an ethical way for business leaders to usher their workforces through digital disruption?

Researchers wrestled with those questions recently at MIT Technology Review’s EmTech Next conference. Their conclusion: Company leaders need to better understand the negative repercussions of the technologies they adopt and commit to building systems that drive economic growth and social cohesion.

Pramod Khargonekar, vice chancellor for research at University of California, Irvine, and Meera Sampath, associate vice chancellor for research at the State University of New York, presented findings from their paper, “Socially Responsible Automation: A Framework for Shaping the Future.”

The research makes the case that “humans will and should remain critical and central to the workplace of the future, controlling, complementing and augmenting the strengths of technological solutions.” In this scenario, automation, artificial intelligence, and related technologies are tools that should be used to enrich human lives and livelihoods.

Aspirational, yes, but how do we get there?

The info is here.

Monday, August 5, 2019

Ethics working group to hash out what kind of company service is off limits

Chris Marquette
www.rollcall.com
Originally published July 22, 2019

A House Ethics Committee working group on Thursday will discuss proposed regulations to govern what kinds of roles lawmakers may perform in companies, part of a push to head off the kind of ethical issues that led to the federal indictment of Rep. Chris Collins, who is accused of insider trading while simultaneously serving as a company board member and public official.

(cut)

House Resolution 6 created a new clause in the Code of Official Conduct — set to take effect Jan. 1, 2020 — that prohibits members, delegates, resident commissioners, officers or employees in the House from serving as an officer or director of any public company.

The clause required the Ethics Committee to develop, by Dec. 31, regulations addressing other prohibited services or positions that could lead to conflicts of interest.

The info is here.

Ethical considerations in assessment and behavioral treatment of obesity: Issues and practice implications for clinical health psychologists

Williamson, T. M., Rash, J. A., Campbell, T. S., & Mothersill, K. (2019).
Professional Psychology: Research and Practice. Advance online publication.
http://dx.doi.org/10.1037/pro0000249

Abstract

The obesity epidemic in the United States and Canada has been accompanied by an increased demand on behavioral health specialists to provide comprehensive behavior therapy for weight loss (BTWL) to individuals with obesity. Clinical health psychologists are optimally positioned to deliver BTWL because of their advanced competencies in multimodal assessment, training in evidence-based methods of behavior change, and proficiencies in interdisciplinary collaboration. Although published guidelines provide recommendations for optimal design and delivery of BTWL (e.g., behavior modification, cognitive restructuring, and mindfulness practice; group-based vs. individual therapy), guidelines on ethical issues that may arise during assessment and treatment remain conspicuously absent. This article reviews clinical practice guidelines, ethical codes (i.e., the Canadian Code of Ethics for Psychologists and the American Psychological Association Ethical Principles of Psychologists), and the extant literature to highlight obesity-specific ethical considerations for psychologists who provide assessment and BTWL in health care settings. Five key themes emerge from the literature: (a) informed consent (instilling realistic treatment expectations; reasonable alternatives to BTWL; privacy and confidentiality); (b) assessment (using a biopsychosocial approach; selecting psychological tests); (c) competence and scope of practice (self-assessment; collaborative care); (d) recognition of personal bias and discrimination (self-examination, diversity); and (e) maximizing treatment benefit while minimizing harm. Practical recommendations grounded in the American Psychological Association’s competency training model for clinical health psychologists are discussed to assist practitioners in addressing and mitigating ethical issues in practice.

Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that three levels of ethical regulation inter-relate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.

The paper is here.

Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It’s difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, when considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people’s attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions. And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one’s choices and their outcomes.

(cut)

Perhaps, fiction points us towards the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions.  No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or our feeling such attitudes towards them, for instance when they harm humans.

The research paper is here.

Friday, August 2, 2019

Therapist accused of sending client photos of herself in lingerie can’t get her state license back: Pa. court

Matt Miller
www.pennlive.com
Originally posted July 17, 2019

A therapist who was accused of sending a patient photos of herself in lingerie can’t have her state counseling license back, a Commonwealth Court panel ruled Wednesday.

That is so even though Sheri Colston denied sending those photos or having any inappropriate interactions with the male client, the court found in an opinion by Judge Robert Simpson.

The court ruling upholds an indefinite suspension of Colston’s license imposed by the State Board of Social Workers, Marriage and Family Therapists and Professional Counselors. That board also ordered Colston to pay $7,409 to cover the cost of investigating her case.

The info is here.