Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, January 24, 2018

Top 10 lies doctors tell themselves

Pamela Wible
www.idealmedicalcare.org
Originally published December 27, 2017

Here is an excerpt:

Sydney Ashland: “I must overwork and overextend myself.” I hear this all the time. Workaholism, alcoholism, self-medicating: these are the top coping strategies that we, as medical professionals, use to deal with unrealistic work demands. We tell ourselves, “In order to get everything done that I have to get done, in order to meet expectations and meet the deadlines, I have to overwork.” And this is not true. If you believe it, you are participating in the lie; you’re enabling it. Start to claim yourself. Start to claim your time. Don’t participate. Don’t believe that there is a magic workaround or gimmick that’s going to enable you to stay in a toxic work environment and reshuffle the deck. What happens is that in the shuffling process you continue to overcompensate, overdo, overextend yourself—and you’ve moved from overwork on the face of things to complicating your life. This is common. Liberate yourself. You can be free. It’s not about overwork.

Pamela Wible: And here’s the thing that really is almost humorous. What physicians do when they’re overworked, their solution for overwork—is to overwork. Right? They’re like, “Okay. I’m exhausted. I’m tired. My office isn’t working. I’ll get another phone line. I’ll get two more receptionists. I’ll add three more patients per day.” Your solution to overwork, if it’s overwork, is probably not going to work.

The interview is here.

The Moral Fabric and Social Norms

AEI Political Report
Volume 14, Issue 1
January 2018

A large majority now, as in the past, say moral values in the country are getting worse. Social conservatives, moderates, and liberals agree. At the same time, however, as these pages show, people accept some behaviors once thought wrong. Later in this issue, we look at polls on women’s experiences with sexual harassment, a topic that has drawn public scrutiny following recent allegations of misconduct against high-profile individuals.

Q: Right now, do you think . . . ?

[Poll results chart not reproduced.]
Tuesday, January 23, 2018

President Trump’s Mental Health — Is It Morally Permissible for Psychiatrists to Comment?

Claire Pouncey
The New England Journal of Medicine
December 27, 2017

Ralph Northam, a pediatric neurologist who was recently elected governor of Virginia, distinguished himself during the gubernatorial race by calling President Donald Trump a “narcissistic maniac.” Northam drew criticism for using medical diagnostic terminology to denounce a political figure, though he defended the terminology as “medically correct.” The term isn’t medically correct — “maniac” has not been a medical term for well over a century — but Northam’s use of it in either medical or political contexts would not be considered unethical by his professional peers.

For psychiatrists, however, the situation is different, which is why many psychiatrists and other mental health professionals have refrained from speculating about Trump’s mental health. But in October, psychiatrist Bandy Lee published a collection of essays written largely by mental health professionals who believe that their training and expertise compel them to warn the public of the dangers they see in Trump’s psychology. The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President rejects the position of the American Psychiatric Association (APA) that psychiatrists should never offer diagnostic opinions about persons they have not personally examined. Past APA president Jeffrey Lieberman has written in Psychiatric News that the book is “not a serious, scholarly, civic-minded work, but simply tawdry, indulgent, fatuous tabloid psychiatry.” I believe it shouldn’t be dismissed so quickly.

The article is here.

Best Practices for School-Based Moral Education

Peter Meindl, Abigail Quirk, Jesse Graham
Policy Insights from the Behavioral and Brain Sciences 
First Published December 21, 2017

Abstract

How can schools help students build moral character? One way is to use prepackaged moral education programs, but as we report here, their effectiveness tends to be limited. What, then, can schools do? We took two steps to answer this question. First, we consulted more than 50 of the world’s leading social scientists. These scholars have spent decades studying morality, character, or behavior change but until now few had used their expertise to inform moral education practices. Second, we searched recent studies for promising behavior change techniques that apply to school-based moral education. These two lines of investigation congealed into two recommendations: Schools should place more emphasis on hidden or “stealthy” moral education practices and on a small set of “master” virtues. Throughout the article, we describe practices flowing from these recommendations that could improve both the effectiveness and efficiency of school-based moral education.

The article is here.

Monday, January 22, 2018

Science and Morality

Jim Kozubek
Scientific American
Originally published December 27, 2017

Here is an excerpt:

The argument that genes embody a sort of sacrosanct character that should not be interfered with is not too compelling, since artifacts of viruses are burrowed in our genomes, and genes undergo mutations with each passing generation. Even so, the principle that all life has inherent dignity is hardly a bad thought and provides a necessary counterbalance to the impulse to use in vitro techniques and CRISPR to alter any gene variant to reduce risk or enhance features, none of which are more or less perfect, but merely variations in human evolution.

Indeed, the question of dignity is thornier than we might imagine, since science tends to challenge the belief in abstract or enduring concepts of value. How to uphold beliefs or a sense of dignity seems ever more confusing, and appears to throw us up against an age of radical nihilism, as scientists today are using the gene-editing tool CRISPR to do things such as tinker with the color of butterfly wings, genetically alter pigs, and even humans. If science is a method of truth-seeking and technology its mode of power, then CRISPR is a means to the commodification of life. It also raises the possibility that this power can erode societal trust.

The article is here.

Should US Physicians Support the Decriminalization of Commercial Sex?

Emily F. Rothman
AMA Journal of Ethics, January 2017, Volume 19, Number 1: 110-121.

Abstract

According to the World Health Organization, “commercial sex” is the exchange of money or goods for sexual services, and this term can be applied to both consensual and nonconsensual exchanges. Some nonconsensual exchanges qualify as human trafficking. Whether the form of commercial sex that is also known as prostitution should be decriminalized is being debated contentiously around the world, in part because the percentage of commercial sex exchanges that are consensual as opposed to nonconsensual, or trafficked, is unknown. This paper explores the question of decriminalization of commercial sex with reference to the bioethical principles of beneficence, nonmaleficence, and respect for autonomy. It concludes that though there is no perfect policy solution to the various ethical problems associated with commercial sex that can arise under either criminalized or decriminalized conditions, the Nordic model offers several potential advantages. This model criminalizes the buying of sex and third-party brokering of sex (i.e., pimping) but exempts sex sellers (i.e., prostitutes, sex workers) from criminal penalties. However, ongoing support for this type of policy should be contingent upon positive results over time.

The article is here.

Sunday, January 21, 2018

Cognitive Economics: How Self-Organization and Collective Intelligence Works

Geoff Mulgan
evonomics.com
Originally published December 22, 2017

Here are two excerpts:

But self-organization is not an altogether-coherent concept and has often turned out to be misleading as a guide to collective intelligence. It obscures the work involved in organization and in particular the hard work involved in high-dimensional choices. If you look in detail at any real example—from the family camping trip to the operation of the Internet, from open-source software to everyday markets—these are only self-organizing if you look from far away. Look more closely and different patterns emerge. You quickly find some key shapers—like the designers of underlying protocols, or the people setting the rules for trading. There are certainly some patterns of emergence. Many ideas may be tried and tested before only a few successful ones survive and spread. To put it in the terms of network science, the most useful links survive and are reinforced; the less useful ones wither. The community decides collectively which ones are useful. Yet on closer inspection, there turn out to be concentrations of power and influence even in the most decentralized communities, and when there’s a crisis, networks tend to create temporary hierarchies—or at least the successful ones do—to speed up decision making. As I will show, almost all lasting examples of social coordination combine some elements of hierarchy, solidarity, and individualism.
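Mulgan’s network-science aside (useful links reinforced, less useful ones withering) can be made concrete with a toy simulation. The sketch below is purely illustrative: the decay rate, the reinforcement rule, and the pruning threshold are assumptions made for the example, not anything taken from the article.

    import random

    random.seed(1)

    # Ten links start out with equal weight; each has a hidden, fixed "usefulness".
    links = {f"link_{i}": 1.0 for i in range(10)}
    usefulness = {name: random.random() for name in links}

    DECAY = 0.8     # every link's weight decays each round
    REWARD = 0.2    # reinforcement in proportion to how useful the link proves
    PRUNE_AT = 0.5  # links whose weight falls below this threshold wither

    for _ in range(50):
        for name in list(links):
            links[name] = DECAY * links[name] + REWARD * usefulness[name]
            if links[name] < PRUNE_AT:
                del links[name]  # the less useful link withers away

    # Only the most useful links survive: "emergence", but driven by a
    # simple reinforcement rule rather than by leaderless magic.
    for name in sorted(links, key=links.get, reverse=True):
        print(name, round(usefulness[name], 2))

Run long enough, each weight converges to REWARD * usefulness / (1 - DECAY), which with these constants is simply the link’s usefulness, so the pruning threshold ends up selecting the links whose usefulness exceeds 0.5. The point of the toy is Mulgan’s: the selection looks emergent from afar, but up close a concrete reinforcement rule is doing the work.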

(cut)

Here we see a more common pattern. The more dimensional any choice is, the more work is needed to think it through. If it is cognitively multidimensional, we may need many people and more disciplines to help us toward a viable solution. If it is socially dimensional, then there is no avoiding a good deal of talk, debate, and argument on the way to a solution that will be supported. And if the choice involves long feedback loops, where results come long after actions have been taken, there is the hard labor of observing what actually happens and distilling conclusions. The more dimensional the choice in these senses, the greater the investment of time and cognitive energy needed to make successful decisions.

Again, it is possible to overshoot: to analyze a problem too much or from too many angles, bring too many people into the conversation, or wait too long for perfect data and feedback rather than relying on rough-and-ready quicker proxies. All organizations struggle to find a good enough balance between their allocation of cognitive resources and the pressures of the environment they’re in. But the long-term trend of more complex societies is to require ever more mediation and intellectual labor of this kind.

The article is here.

Saturday, January 20, 2018

Exploiting Risk–Reward Structures in Decision Making under Uncertainty

Christina Leuker, Thorsten Pachur, Ralph Hertwig, and Timothy Pleskac
PsyArXiv Preprints
Posted December 21, 2017

Abstract

People often have to make decisions under uncertainty — that is, in situations where the probabilities of obtaining a reward are unknown or at least difficult to ascertain. Because outside the laboratory payoffs and probabilities are often correlated, one solution to this problem might be to infer the probability from the magnitude of the potential reward. Here, we investigated how the mind may implement such a solution: (1) Do people learn about risk–reward relationships from the environment—and if so, how? (2) How do learned risk–reward relationships impact preferences in decision making under uncertainty? Across three studies (N = 352), we found that participants learned risk–reward relationships after being exposed to choice environments with a negative, positive, or uncorrelated risk–reward relationship. They learned the associations both from gambles with explicitly stated payoffs and probabilities (Experiments 1 & 2) and from gambles about epistemic events (Experiment 3). In subsequent decisions under uncertainty, participants exploited the learned association by inferring probabilities from the magnitudes of the payoffs. This inference systematically influenced their preferences under uncertainty: Participants who learned a negative risk–reward relationship preferred the uncertain option over a smaller sure option for low payoffs, but not for high payoffs. This pattern reversed in the positive condition and disappeared in the uncorrelated condition. This adaptive change in preferences is consistent with the use of the risk–reward heuristic.

From the Discussion Section:

Risks and rewards are the pillars of preference. This makes decision making under uncertainty a vexing problem as one of those pillars—the risks, or probabilities—is missing (Knight, 1921; Luce & Raiffa, 1957). People are commonly thought to deal with this problem by intuiting subjective probabilities from their knowledge and memory (Fox & Tversky, 1998; Tversky & Fox, 1995) or by estimating statistical probabilities from samples of information (Hertwig & Erev, 2009). Our results support another ecologically grounded solution, namely, that people estimate the missing probabilities from their immediate choice environments via their learned risk–reward relationships.
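To make that inference concrete, here is a minimal sketch of the risk–reward heuristic at work. Everything in it is assumed for illustration (the payoff range, the roughly 1/payoff shape of the negative condition, and the helper names inferred_prob and choose); it is not the authors’ model or their stimuli.

    import numpy as np

    rng = np.random.default_rng(0)

    # Learning phase: exposure to a choice environment with a negative
    # risk-reward relationship (larger payoffs come with smaller probabilities).
    payoffs = rng.uniform(1, 100, size=200)
    probs = np.clip(1.0 / payoffs + rng.normal(0.0, 0.02, size=200), 0.01, 1.0)

    # Learn the association: least-squares fit of probability on 1/payoff.
    slope, intercept = np.polyfit(1.0 / payoffs, probs, deg=1)

    def inferred_prob(payoff):
        """Infer the missing probability from the payoff's magnitude alone."""
        return float(np.clip(slope / payoff + intercept, 0.0, 1.0))

    # Decision under uncertainty: an uncertain payoff against a smaller sure
    # amount, decided by expected value under the inferred probability.
    def choose(uncertain_payoff, sure_amount):
        ev = inferred_prob(uncertain_payoff) * uncertain_payoff
        return "uncertain" if ev > sure_amount else "sure"

    # A low payoff implies a high inferred probability, so the gamble wins...
    print(choose(uncertain_payoff=2, sure_amount=0.6))   # expected: "uncertain"
    # ...while a high payoff implies a low one, and the sure amount wins.
    print(choose(uncertain_payoff=50, sure_amount=15))   # expected: "sure"

Under the learned negative relationship, the same expected-value comparison flips between low and high payoffs, which is the preference reversal the abstract reports.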

The research is here.

Friday, January 19, 2018

Why banning autonomous killer robots wouldn’t solve anything

Susanne Burri and Michael Robillard
aeon.com
Originally published December 19, 2017

Here is an excerpt:

For another thing, it is naive to assume that we can enjoy the benefits of the recent advances in artificial intelligence (AI) without being exposed to at least some downsides as well. Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.

To put the point more generally, AI technology is tremendously useful, and it already permeates our lives in ways we don’t always notice, and aren’t always able to comprehend fully. Given its pervasive presence, it is shortsighted to think that the technology’s abuse can be prevented if only the further development of autonomous weapons is halted. In fact, it might well take the sophisticated and discriminate autonomous-weapons systems that armies around the world are currently in the process of developing if we are to effectively counter the much cruder autonomous weapons that are quite easily constructed through the reprogramming of seemingly benign AI technology such as the self-driving car.

The article is here.