Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, November 18, 2019

Suicide Has Been Deadlier Than Combat for the Military

Carol Giacomo
The New York Times
Originally published November 1, 2019

Here are two excerpts:

The data for veterans is also alarming.

In 2016, veterans were one and a half times more likely to kill themselves than people who hadn’t served in the military, according to the House Committee on Oversight and Reform.

Among those ages 18 to 34, the rate went up nearly 80 percent from 2005 to 2016.

The risk nearly doubles in the first year after a veteran leaves active duty, experts say.

The Pentagon this year also reported on military families, estimating that in 2017 there were 186 suicide deaths among military spouses and dependents.

(cut)

Experts say suicides are complex, resulting from many factors, notably impulsive decisions with little warning. Pentagon officials say a majority of service members who die by suicide do not have mental illness. While combat is undoubtedly high stress, there are conflicting views on whether deployments increase risk.

Where there seems to be consensus is that high-quality health care and keeping weapons out of the hands of people in distress can make a positive difference.

Studies show that the Department of Veterans Affairs provides high-quality care, and its Veterans Crisis Line “surpasses most crisis lines” operating today, according to Terri Tanielian, a researcher with the RAND Corporation. (The Veterans Crisis Line is staffed 24/7 at 800-273-8255, press 1. Services also are available online or by texting 838255.)

But Veterans Affairs often can’t accommodate all those needing help, resulting in patients being sent to community-based mental health professionals who lack the training to deal with service members.

The info is here.

Understanding behavioral ethics can strengthen your compliance program

Jeffrey Kaplan
The FCPA Blog
Originally posted October 21, 2019

Behavioral ethics is a well-known field of social science which shows how — due to various cognitive biases — “we are not as ethical as we think.” Behavioral compliance and ethics (which is less well known) attempts to use behavioral ethics insights to develop and maintain effective compliance programs. In this post I explore some of the ways that this can be done.

Behavioral C&E should be viewed on two levels. The first could be called specific behavioral C&E lessons, meaning enhancements to the various discrete C&E program elements — e.g., risk assessment, training — based on behavioral ethics insights. Several of these are discussed below.

The second — and more general — aspect of behavioral C&E is the above-mentioned overarching finding that we are not as ethical as we think. The importance of this general lesson is based on the notion that the greatest challenge to having effective C&E programs in organizations is often more about the “will” than the “way.”

That is, what is lacking in many business organizations is an understanding that strong C&E is truly necessary. After all, if we were as ethical as we think, then effective risk mitigation would be just a matter of finding the right punishment for an offense, and the power of logical thinking would do the rest. Behavioral ethics teaches that that assumption is ill-founded.

The info is here.

Sunday, November 17, 2019

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Stefan Schubert, Lucius Caviola & Nadira S. Faber
Scientific Reports volume 9, Article number: 15100 (2019)

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.
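To make the paper’s key measure concrete, here is a minimal sketch (not the authors’ code, and with invented ratings on an assumed 0-100 badness scale) of what judging extinction “uniquely bad” amounts to: the badness gap between near-extinction and extinction must exceed the gap between no catastrophe and near-extinction.

```python
# Hypothetical ratings, not data from Schubert, Caviola & Faber (2019).
ratings = [
    # (no_catastrophe, near_extinction, extinction)
    (0, 85, 90),   # focuses on immediate deaths: NOT uniquely bad
    (0, 40, 95),   # weighs the lost future: uniquely bad
    (0, 70, 80),   # NOT uniquely bad
    (0, 30, 90),   # uniquely bad
]

def uniquely_bad(no_cat, near_ext, ext):
    """True if the second gap (near-extinction -> extinction)
    exceeds the first gap (no catastrophe -> near-extinction)."""
    return (ext - near_ext) > (near_ext - no_cat)

n_unique = sum(uniquely_bad(*r) for r in ratings)
print(f"{n_unique}/{len(ratings)} participants judge extinction uniquely bad")
```

On this measure, a participant who focuses on immediate death and suffering rates near-extinction almost as bad as extinction, and so fails the test even while agreeing that extinction is bad.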

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

The research is here.

Saturday, November 16, 2019

Moral grandstanding in public discourse: Status-seeking motives as a potential explanatory mechanism in predicting conflict

Grubbs JB, Warmke B, Tosi J, James AS, Campbell WK
(2019) PLoS ONE 14(10): e0223749.
https://doi.org/10.1371/journal.pone.0223749

Abstract

Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted six studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, Baseline N = 499, follow-up n = 296); and a large, one-week longitudinal YouGov sample matched to U.S. demographic norms (Study 6, Baseline N = 2,519, follow-up n = 1,776). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding motivation was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.
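As a rough illustration of the kind of correlational evidence the abstract describes, the sketch below simulates data and tests whether a grandstanding-motivation score tracks status-seeking traits and reported conflict. All numbers are simulated; nothing here comes from Grubbs et al.’s datasets.

```python
# Simulated illustration of the correlational pattern, not the authors' analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 500
status_seeking = rng.normal(size=n)    # prestige/dominance striving (z-scored)
mg_motivation = 0.5 * status_seeking + rng.normal(scale=0.9, size=n)
conflict = 0.4 * mg_motivation + rng.normal(scale=0.9, size=n)

for name, x in [("status-seeking", status_seeking), ("conflict", conflict)]:
    r, p = pearsonr(mg_motivation, x)
    print(f"MG motivation vs {name}: r = {r:.2f}, p = {p:.3g}")
```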

Conclusion

Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present work introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links various domains of psychology with moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Within the present work, we focused on the motivation to engage in MG. Specifically, MG Motivation is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors appear consistent with the construct of status-seeking more broadly, representing prestige and dominance striving, both of which were found to be associated with greater interpersonal conflict and polarization. These results were consistently replicated in samples of U.S. undergraduates, nationally representative cross-sectional samples of U.S. residents, and longitudinal studies of adults in the U.S. Collectively, these results suggest that MG Motivation is a useful psychological construct that has the potential to aid our understanding of the intraindividual mechanisms driving caustic public discourse.

Friday, November 15, 2019

Gartner Fellow discusses ethics in artificial intelligence

Teena Maddox
techrepublic.com
Originally published October 28, 2019

Here is an excerpt:

There are tons of ways you can use AI ethically and also unethically. One example typically cited is using attributes of people that shouldn't be used, for instance, when granting somebody a mortgage or access to something, or when making other decisions. Racial profiling is typically mentioned as an example. So, you need to be mindful of which attributes are being used for making decisions. How do the algorithms learn? Another abuse of AI is, for instance, autonomous killer drones. Would we allow algorithms to decide who gets bombed by a drone and who does not? Most people seem to agree that autonomous killer drones are not a very good idea.
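One way to operationalize “being mindful which attributes are used” is a simple feature-vetting step before any model is trained. The sketch below is our illustration, not anything from the interview; the attribute names, the protected set, and the proxy list are all hypothetical.

```python
# A minimal, hypothetical feature-vetting step for a decision model.
PROTECTED = {"race", "religion", "gender"}   # never used directly
KNOWN_PROXIES = {"zip_code": "race"}         # flagged for human review

def vet_features(features):
    """Split a candidate feature list into allowed, blocked, and flagged."""
    allowed, blocked, flagged = [], [], []
    for f in features:
        if f in PROTECTED:
            blocked.append(f)
        elif f in KNOWN_PROXIES:
            flagged.append((f, KNOWN_PROXIES[f]))  # possible proxy attribute
        else:
            allowed.append(f)
    return allowed, blocked, flagged

print(vet_features(["income", "zip_code", "race", "payment_history"]))
```

Blocking protected attributes alone is not sufficient, since seemingly neutral features can act as proxies for them; that is why the sketch also flags known proxies for review rather than silently passing them through.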

The most important thing that a developer can do in order to create ethical AI is to think of this not as technology, but as an exercise in self-reflection. Developers have certain biases. They have certain characteristics themselves. For instance, developers are keen to search for the optimal solution to a problem; it is built into their brains. But ethics is a very pluralistic thing. Different people have different ideas. There is not one optimal answer to what is good and bad. First and foremost, developers should be aware of their own ethical biases about what they think is good and bad, and create an environment of diversity where they test those assumptions and where they test their results. The developer brain isn't the only brain or type of brain that is out there, to say the least.

So, AI and ethics is really a story of hope. For the very first time, the discussion of ethics is taking place before widespread implementation, unlike in previous rounds of technology, where ethical considerations came only after the effects were felt.

The info is here.

Is Moral Relativism Really a Problem?

Thomas Polzler
Scientific American Blog
Originally published October 16, 2019

Here is an excerpt:

Warnings against moral relativism are most often based on theoretical speculation. Critics consider the view’s nature and add certain assumptions about human psychology. Then they infer how being a relativist might affect a person’s behavior. For example, for a relativist, even actions such as murder or rape can never be really or absolutely wrong; they are only wrong to the extent that the relativist or most members of his or her culture believe them to be so.

One may therefore worry that relativists are less motivated to refrain from murdering and raping than people who regard these actions as objectively wrong. While this scenario may sound plausible, relativism’s effects can ultimately be determined only by relevant studies.

So far, scientific investigations do not support the suspicion that moral relativism is problematic. True, there are two studies that do suggest such a conclusion. In one of them, participants were led to think about morality in either relativist or objectivist terms. It turned out that subjects in the relativist condition were more likely to cheat in a lottery and to state that they would be willing to steal than those in the objectivist condition. In the other study, participants who had been exposed to relativist ideas were less likely to donate to charity than those who had been exposed to objectivist ones.
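For readers who want the statistics made explicit, the comparison described above is essentially a two-condition test of proportions. Here is a sketch with invented counts; the cited studies’ actual figures are not reproduced here.

```python
# Hypothetical counts for a relativist-vs-objectivist priming comparison.
from scipy.stats import chi2_contingency

#                  cheated  did_not_cheat
relativist_cond  = [34, 66]
objectivist_cond = [20, 80]

chi2, p, dof, expected = chi2_contingency([relativist_cond, objectivist_cond])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # small p would suggest a condition effect
```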

That said, there is also evidence that associates moral relativism with positive behaviors. In one of her earlier studies, Wright and her colleagues informed their participants that another person disagreed with one of their moral judgments. Then the researchers measured the subjects’ degree of tolerance for this person’s divergent moral view. For example, participants were asked how willing they would be to interact with the person, how willing they would be to help him or her and how comfortable they generally were with another individual denying one of their moral judgments. It turned out that subjects with relativist leanings were more tolerant toward the disagreeing person than those who had tended toward objectivism.

The info is here.

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science, 25 Oct 2019, Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score that indicates they do not need to be enrolled in a “high-risk management program,” the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, the outputs will continue to be racially biased by design, even though the tool does not explicitly take race into account. For this reason, Obermeyer et al. engage the literature on “problem formulation,” which illustrates that depending on how one defines the problem to be solved—whether to lower health care costs or to increase access to care—the outcomes will vary considerably.
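The “problem formulation” point lends itself to a small worked example. The sketch below uses four hypothetical patients, not Obermeyer et al.’s data, to show how ranking by predicted cost rather than by active chronic conditions changes who gets enrolled when program slots are limited.

```python
# Schematic illustration of problem formulation: the label you optimize
# (cost vs. health need) determines who crosses the enrollment threshold.
# Unequal access to care can make a sicker patient *cost* less.

patients = [
    # (id, predicted_cost, active_chronic_conditions)
    ("A", 9000, 2),
    ("B", 4000, 5),   # sicker, but historically lower spending
    ("C", 8000, 1),
    ("D", 3500, 4),   # sicker, but historically lower spending
]

k = 2  # program capacity: enroll the top-k "highest need" patients
by_cost = sorted(patients, key=lambda p: p[1], reverse=True)[:k]
by_need = sorted(patients, key=lambda p: p[2], reverse=True)[:k]

print("enrolled by predicted cost:", [p[0] for p in by_cost])  # A, C
print("enrolled by health need:  ", [p[0] for p in by_need])   # B, D
```

The two rankings disagree completely even though both tools are “accurate” at their own objective, which is the sense in which the bias is by design rather than a prediction error.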

Cooperation and Learning in Unfamiliar Situations

McAuliffe, W. H. B., Burton-Chellew, M. N., & McCullough, M. E. (2019).
Current Directions in Psychological Science, 28(5), 436–440.
https://doi.org/10.1177/0963721419848673

Abstract

Human social life is rife with uncertainty. In any given encounter, one can wonder whether cooperation will generate future benefits. Many people appear to resolve this dilemma by initially cooperating, perhaps because (a) encounters in everyday life often have future consequences, and (b) the costs of alienating oneself from long-term social partners often outweighed the short-term benefits of acting selfishly over our evolutionary history. However, because cooperating with other people does not always advance self-interest, people might also learn to withhold cooperation in certain situations. Here, we review evidence for two ideas: that people (a) initially cooperate or not depending on the incentives that are typically available in their daily lives and (b) also learn through experience to adjust their cooperation on the basis of the incentives of unfamiliar situations. We compare these claims with the widespread view that anonymously helping strangers in laboratory settings is motivated by altruistic desires. We conclude that the evidence is more consistent with the idea that people stop cooperating in unfamiliar situations because they learn that it does not help them, either financially or through social approval.
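The learning account in (b) can be illustrated with a toy reinforcement-learning agent. This is our sketch, not the authors’ model: the agent starts with a cooperative prior and updates its payoff estimates after each anonymous one-shot game, where cooperating is a pure cost with no return benefit.

```python
# Toy epsilon-greedy learner: cooperation declines once experience shows
# that anonymous one-shot cooperation yields no financial or social return.
import random

random.seed(1)
value = {"C": 1.0, "D": 0.0}     # cooperative prior: "C" looks best at first
payoff = {"C": -1.0, "D": 0.0}   # anonymous one-shot: cooperating is a pure cost
alpha, epsilon = 0.2, 0.1        # learning rate, exploration rate

choices = []
for t in range(200):
    if random.random() < epsilon:
        action = random.choice(["C", "D"])
    else:
        action = max(value, key=value.get)
    choices.append(action)
    value[action] += alpha * (payoff[action] - value[action])  # prediction-error update

early = choices[:20].count("C") / 20
late = choices[-20:].count("C") / 20
print(f"cooperation rate: first 20 rounds {early:.2f}, last 20 rounds {late:.2f}")
```

The agent cooperates early (its prior from everyday life) and stops once feedback reveals that this situation’s incentives differ, which is the pattern the review attributes to participants rather than a fixed altruistic or selfish disposition.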

Conclusion

Experimental economists have long emphasized the role of learning in social decision-making (e.g., Binmore, 1999). However, cooperation researchers have only recently considered how people’s past social interactions shape their expectations in novel social situations. An important lesson from the research reviewed here is that people’s behavior in any single situation is not necessarily a direct read-out of how selfish or altruistic they are, especially if the situation’s incentives differ from what they normally encounter in everyday life.

Wednesday, November 13, 2019

Dynamic Moral Judgments and Emotions

Magda Osman
Published Online June 2015 in SciRes.
http://www.scirp.org/journal/psych

Abstract

We may experience strong moral outrage when we read a news headline that describes a prohibited action, but when we gain additional information by reading the main news story, do our emotional experiences change at all, and if they do, in what way do they change? In a single online study with 80 participants, the aim was to examine the extent to which emotional experiences (disgust, anger) and moral judgments track changes in information about a moral scenario. The evidence from the present study suggests that we systematically adjust our moral judgments and our emotional experiences as a result of exposure to further information about the morally dubious action referred to in a moral scenario. More specifically, the way in which we adjust our moral judgments and emotions appears to be based on information signalling whether a morally dubious act is permitted or prohibited.

From the Discussion

The present study showed that moral judgments changed in response to different details concerning the moral scenarios, and while participants gave the most severe judgments for the initial limited information regarding the scenario (i.e. the headline), they adjusted the severity of their judgments downwards as more information was provided (i.e. main story, conclusion). In other words, when context was provided for why a morally dubious action was carried out, people used this to inform their later judgments and consciously integrated this new information into their judgments of the action. Crucially, this reflects the fact that judgments and emotions are not fixed, and that they are likely to operate on rational processes (Huebner, 2011, 2014; Teper et al., 2015). More to the point, this evidence suggests that there may well be an integrated representation of the moral scenario that is based on informational content as well as personal emotional experiences that signal the valence on which the information should be judged. The evidence from the present study suggests that both moral judgments and emotional experiences change systematically in response to changes in information that critically concern the way in which a morally dubious action should be evaluated.
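To see what “adjusting judgments downwards” looks like as a repeated-measures pattern, here is a sketch with invented severity ratings (1 = not at all wrong, 7 = extremely wrong), not Osman’s data, comparing the same participants across the three information stages.

```python
# Hypothetical repeated-measures ratings across headline, story, conclusion.
from scipy.stats import friedmanchisquare

headline   = [7, 6, 7, 6, 7, 5, 6, 7]
main_story = [5, 5, 6, 4, 6, 4, 5, 6]
conclusion = [4, 4, 5, 3, 5, 3, 4, 5]

stat, p = friedmanchisquare(headline, main_story, conclusion)
means = [sum(s) / len(s) for s in (headline, main_story, conclusion)]
print(f"mean severity by stage: {[f'{m:.1f}' for m in means]}")
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
```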

A pdf can be downloaded here.