Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, September 14, 2019

Do People Want to Be More Moral?

Jessie Sun and Geoffrey Goodwin
PsyArXiv Preprints
Originally posted August 26, 2019

Abstract

Most people want to change some aspects of their personality, but does this phenomenon extend to moral character, and to close others? Targets (N = 800) and well-acquainted informants (N = 958) rated targets’ personality traits and reported how much they wanted the target to change each trait. Targets and informants reported a lower desire to change more morally-relevant traits (e.g., honesty, compassion), compared to less morally-relevant traits (e.g., anxiety, sociability). Moreover, although targets and informants generally wanted targets to improve more on traits that targets had less desirable levels of, targets’ moral change goals were less calibrated to their current levels. Finally, informants wanted targets to change in similar ways, but to a lesser extent, than targets themselves did. These findings shed light on self–other similarities and asymmetries in personality change goals, and suggest that the general desire for self-improvement may be less prevalent in the moral domain.

From the Discussion:

Why don’t people particularly want to be more moral? One possibility is that people see less room for improvement on moral traits, especially given the relatively high ratings on these traits.  Our data cannot speak directly to this possibility, because people might not be claiming that they have the lowest or highest possible levels of each trait when they “strongly disagree” or “strongly agree” with each trait description (Blanton & Jaccard, 2006). Testing this idea would therefore require a more direct measure of where people think they stand, relative to these extremes.

A related possibility is that people are less motivated to improve moral traits because they already see themselves as being quite high on such traits, and therefore morally “good enough”—even if they think they could be morally better (see Schwitzgebel, 2019). Consistent with this idea, supplemental analyses showed that people are less inclined to change the traits that they rate themselves higher on, compared to traits that they rate themselves lower on. However, even controlling for current levels, people are still less inclined to change more morally-relevant traits (see Supplemental Material for these within-person analyses), suggesting that additional psychological factors might reduce people’s desire to change morally-relevant traits. One additional possibility is that people are more motivated to change in ways that will improve their own well-being (Hudson & Fraley, 2016). Whereas becoming less anxious has obvious personal benefits, people might believe that becoming more moral would result in few personal benefits (or even costs).

The research is here.

Friday, September 13, 2019

Intention matters to make you (im)moral: Positive-negative asymmetry in moral character evaluations

Paula Yumi Hirozawa, M. Karasawa & A. Matsuo
The Journal of Social Psychology (2019)
DOI: 10.1080/00224545.2019.1653254

Abstract

Is intention, even if unfulfilled, enough to make a person appear to be good or bad? In this study, we investigated the influence of unfulfilled intentions of an agent on subsequent moral character evaluations. We found a positive-negative asymmetry in the effect of intentions. Factual information concerning failure to fulfill a positive intention mitigated the morality judgment of the actor, yet this mitigation was not as evident for the negative vignettes. Participants rated an actor who failed to fulfill their negative intention as highly immoral, as long as there was an external explanation to its unfulfillment. Furthermore, both emotional and cognitive (i.e., informativeness) processes mediated the effect of negative intention on moral character. For the positive intention, there was a significant mediation by emotions, yet not by informativeness. Results evidence the relevance of mental states in moral character evaluations and offer affective and cognitive explanations to the asymmetry.

Conclusion

In this study, we investigated whether intentions by themselves are enough to make an agent appear to be good or bad. The answer is yes, but with a caveat. We found that negative intentions are more indicative of an immoral character than positive intentions are diagnostic of a moral character. Simply intending to offer cookies should not, after all, make a neighbor particularly virtuous, unless the intention is acted out. The positive-negative asymmetry demonstrated in the present study may capture a fundamental aspect of people’s moral judgments, particularly for disposition-based evaluations.

The dynamics of social support among suicide attempters: A smartphone-based daily diary study

Coppersmith, D. D. L., Kleiman, E. M., Glenn, C. R., Millner, A. J., & Nock, M. K.
Behaviour Research and Therapy (2018)

Abstract

Decades of research suggest that social support is an important factor in predicting suicide risk and resilience. However, no studies have examined dynamic fluctuations in day-by-day levels of perceived social support. We examined such fluctuations over 28 days among a sample of 53 adults who attempted suicide in the past year (992 total observations). Variability in social support was analyzed with between-person intraclass correlations and root mean square of successive differences. Multi-level models were conducted to determine the association between social support and suicidal ideation. Results revealed that social support varies considerably from day to day with 45% of social support ratings differing by at least one standard deviation from the prior assessment. Social support is inversely associated with same-day and next-day suicidal ideation, but not with next-day suicidal ideation after adjusting for same-day suicidal ideation (i.e., not with daily changes in suicidal ideation). These results suggest that social support is a time-varying protective factor for suicidal ideation.
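The day-to-day variability statistic the abstract mentions, the root mean square of successive differences (RMSSD), is straightforward to compute. The sketch below uses made-up daily social-support ratings purely for illustration; the rating scale and values are hypothetical, not taken from the study.

```python
import math

def rmssd(series):
    """Root mean square of successive differences: quantifies how much a
    daily-diary measure fluctuates from one assessment to the next."""
    if len(series) < 2:
        raise ValueError("Need at least two observations")
    diffs = [b - a for a, b in zip(series, series[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical daily social-support ratings for one participant
ratings = [5, 3, 6, 6, 2, 5, 4]
print(round(rmssd(ratings), 2))  # larger values = more day-to-day variability
```

A near-zero RMSSD would mean a participant's perceived support is essentially stable across days; the study's point is that, for recent suicide attempters, it is not.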

The research is here.

Thursday, September 12, 2019

Americans Have Shifted Dramatically on What Values Matter Most

Chad Day
The Wall Street Journal
Originally published August 25, 2019

The values that Americans say define the national character are changing, as younger generations rate patriotism, religion and having children as less important to them than did young people two decades ago, a new Wall Street Journal/NBC News survey finds.

The poll is the latest sign of difficulties the 2020 presidential candidates will likely face in crafting a unifying message for a country divided over personal principles and views of an increasingly diverse society.

When the Journal/NBC News survey asked Americans 21 years ago to say which values were most important to them, strong majorities picked the principles of hard work, patriotism, commitment to religion and the goal of having children.

Today, hard work remains atop the list, but the shares of Americans listing the other three values have fallen substantially, driven by changing priorities of people under age 50.

Some 61% in the new survey cited patriotism as very important to them, down 9 percentage points from 1998, while 50% cited religion, down 12 points. Some 43% placed a high value on having children, down 16 points from 1998.

Views varied sharply by age. Among people 55 and older, for example, nearly 80% said patriotism was very important, compared with 42% of those ages 18-38—the millennial generation and older members of Gen-Z.

Two-thirds of the older group cited religion as very important, compared with fewer than one-third of the younger group.

“There’s an emerging America where issues like children, religion and patriotism are far less important. And in America, it’s the emerging generation that calls the shots about where the country is headed,” said Republican pollster Bill McInturff, who conducted the survey with Democratic pollster Jeff Horwitt.

The info is here.

Morals Ex Machina: Should We Listen To Machines For Moral Guidance?

Michael Klenk
3QuarksDaily.com
Originally posted August 12, 2019

Here are two excerpts:

The prospects of artificial moral advisors depend on two core questions: Should we take ethical advice from anyone anyway? And, if so, are machines any good at morality (or, at least, better than us, so that it makes sense that we listen to them)? I will only briefly be concerned with the first question and then turn to the second question at length. We will see that we have to overcome several technical and practical barriers before we can reasonably take artificial moral advice.

(cut)

The limitation of ethically aligned artificial advisors raises an urgent practical problem, too. From a practical perspective, decisions about values and their operationalisation are taken by the machine’s designers. Taking their advice means buying into preconfigured ethical settings. These settings might not agree with you, and they might be opaque so that you have no way of finding out how specific values have been operationalised. This would require accepting the preconfigured values on blind trust. The problem already exists in machines that give non-moral advice, such as mapping services. For example, when you ask your phone for the way to the closest train station, the device will have to rely on various assumptions about what path you can permissibly take and it may also consider commercial interests of the service provider. However, we should want the correct moral answer, not what the designers of such technologies take that to be.

We might overcome these practical limitations by letting users input their own values and decide about their operationalisation themselves. For example, the device might ask users a series of questions to determine their ethical views and also require them to operationalise each ethical preference precisely. A vegetarian might, for instance, have to decide whether she understands ‘vegetarianism’ to encompass ‘meat-free meals’ or ‘meat-free restaurants.’ Doing so would give us personalised moral advisors that could help us live more consistently by our own ethical rules.

However, it would then be unclear how specifying our individual values and their operationalisation improves our moral decision making, rather than merely helping individuals satisfy their preferences more consistently.

The info is here.

Wednesday, September 11, 2019

Assessment of Patient Nondisclosures to Clinicians of Experiencing Imminent Threats

Levy AG, Scherer AM, Zikmund-Fisher BJ, Larkin K, Barnes GD, Fagerlin A.
JAMA Netw Open. Published online August 14, 2019. 2(8):e199277.
doi:10.1001/jamanetworkopen.2019.9277

Question 

How common is it for patients to withhold information from clinicians about imminent threats that they face (depression, suicidality, abuse, or sexual assault), and what are common reasons for nondisclosure?

Findings 

This survey study, incorporating 2 national, nonprobability, online surveys of a total of 4,510 US adults, found that at least one-quarter of participants who experienced each imminent threat reported withholding this information from their clinician. The most commonly endorsed reasons for nondisclosure included potential embarrassment, fear of being judged, and wanting to avoid difficult follow-up behavior.

Meaning

These findings suggest that concerns about potential negative repercussions may lead many patients who experience imminent threats to avoid disclosing this information to their clinician.

Conclusion

This study reveals an important concern about clinician-patient communication: if patients commonly withhold information from clinicians about significant threats that they face, then clinicians are unable to identify and attempt to mitigate these threats. Thus, these results highlight the continued need to develop effective interventions that improve the trust and communication between patients and their clinicians, particularly for sensitive, potentially life-threatening topics.

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Forbes.com
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or your environmental or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.

The info is here.

Tuesday, September 10, 2019

Physicians Talking With Their Partners About Patients

Morris NP, & Eshel N.
JAMA. Published online August 16, 2019.
doi:10.1001/jama.2019.12293

Maintaining patient privacy is a fundamental responsibility for physicians. However, physicians often share their lives with partners or spouses. A 2018 survey of 15,069 physicians found that 85% were currently married or living with a partner, and when physicians come home from work, their partners might reasonably ask about their day. Physicians are supposed to keep patient information private in almost all circumstances, but are these realistic expectations for physicians and their partners? Might this expectation preclude potential benefits of these conversations?

In many cases, physician disclosure of clinical information to partners may violate patients’ trust. Patient privacy is so integral to the physician role that the Hippocratic oath notes, “And whatsoever I shall see or hear in the course of my profession...if it be what should not be published abroad, I will never divulge, holding such things to be holy secrets.” Whether over routine health care matters, such as blood pressure measurements, or potentially sensitive topics, such as end-of-life decisions, concerns of abuse, or substance use, patients expect their interactions with physicians to be kept in the strictest confidence. No hospital or clinic provides patients with the disclaimer, “Your private health information may be shared over the dinner table.” If a patient learned that his physician shared information about his medical encounters without permission, the patient may be far less likely to trust the physician or participate in ongoing care.

Physicians who share details with their partners about patients may not anticipate the effects of doing so. For instance, a physician’s partner could recognize the patient being discussed, whether from social connections or media coverage. After sharing patient information, physicians lose control of this information, and their partners, who may have less training about medical privacy, could unintentionally reveal sensitive patient information during future conversations.

The info is here.

Can Ethics Be Taught?

Peter Singer
Project Syndicate
Originally published August 7, 2019

Can taking a philosophy class – more specifically, a class in practical ethics – lead students to act more ethically?

Teachers of practical ethics have an obvious interest in the answer to that question. The answer should also matter to students thinking of taking a course in practical ethics. But the question also has broader philosophical significance, because the answer could shed light on the ancient and fundamental question of the role that reason plays in forming our ethical judgments and determining what we do.

Plato, in the Phaedrus, uses the metaphor of a chariot pulled by two horses; one represents rational and moral impulses, the other irrational passions or desires. The role of the charioteer is to make the horses work together as a team. Plato thinks that the soul should be a composite of our passions and our reason, but he also makes it clear that harmony is to be found under the supremacy of reason.

In the eighteenth century, David Hume argued that this picture of a struggle between reason and the passions is misleading. Reason on its own, he thought, cannot influence the will. Reason is, he famously wrote, “the slave of the passions.”

The info is here.