Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 31, 2020

The new trinity of religious moral character: the Cooperator, the Crusader, and the Complicit

S. Abrams, J. Jackson, & K. Gray
Current Opinion in Psychology, 2021, 40:99–105

Abstract

Does religion make people good or bad? We suggest that there are at least three distinct profiles of religious morality: the Cooperator, the Crusader, and the Complicit. Cooperators forego selfishness to benefit others, crusaders harm outgroups to bolster their own religious community, and the complicit use religion to justify selfish behavior and reduce blame. Different aspects of religion motivate each character: religious reverence makes people cooperators, religious tribalism makes people crusaders, and religious absolution makes people complicit. This framework makes sense of previous research by explaining when and how religion can make people more or less moral.

Highlights

• Different aspects of religion inspire both morality and immorality.

• These distinct influences are summarized through three profiles of moral character.

• The ‘Cooperator’ profile shows how religious reverence encourages people to sacrifice self-interest.

• The ‘Crusader’ profile shows how religious tribalism motivates ingroup loyalty and outgroup hostility.

• The ‘Complicit’ profile shows how religious absolution allows people to justify selfish behavior.

From the Conclusion

Religion and morality are complex, and so is their relationship. This review makes sense of religious and moral complexity through a taxonomy of three moral characters — the Cooperator, the Crusader, and the Complicit — each of which is facilitated by different aspects of religion. Religious reverence encourages people to be cooperators, religious tribalism justifies people in behaving like crusaders, and religious absolution allows people to be complicit.

Friday, October 30, 2020

The corporate responsibility facade is finally starting to crumble

Alison Taylor
Yahoo Finance
Originally posted 4 March 2020

Here is an excerpt:

Any claim to be a responsible corporation is predicated on addressing these abuses of power. But most companies are instead clinging with remarkable persistence to the façades they’ve built to deflect attention. Compliance officers focus on pleasing regulators, even though there is limited evidence that their recommendations reduce wrongdoing. Corporate sustainability practitioners drown their messages in an alphabet soup of acronyms, initiatives, and alienating jargon about “empowered communities” and “engaged stakeholders,” when both functions are still considered peripheral to corporate strategy.

When reading a corporation’s sustainability report and then comparing it to its risk disclosures—or worse, its media coverage—we might as well be reading about entirely distinct companies. Investors focused on sustainability speak of “materiality” principles, meant to sharpen our focus on the most relevant environmental, social, and governance (ESG) issues for each industry. But when an issue is “material” enough to threaten core operating models, companies routinely ignore, evade, and equivocate.

Coca-Cola’s most recent annual sustainability report acknowledges its most pressing issue is “obesity concerns and category perceptions.” Accordingly, it highlights its lower-sugar product lines and references responsible marketing. But it continues its vigorous lobbying against soda taxes, and of course continues to make products with known links to obesity and other health problems. Facebook’s sustainability disclosures focus on efforts to fight climate change and improve labor rights in its supply chain, but make no reference to the mental-health impacts of social media or to its role in peddling disinformation and undermining democracy. Johnson & Johnson flags “product quality and safety” as its highest priority issue without mentioning that it is a defendant in criminal litigation over distribution of opioids. UBS touts its sustainability targets but not its ongoing financing of fossil-fuel projects.

Thursday, October 29, 2020

Probabilistic Biases Meet the Bayesian Brain.

Chater N, et al.
Current Directions in Psychological Science. 
2020;29(5):506-512. 
doi:10.1177/0963721420954801

Abstract

In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.

Introduction

Human probabilistic reasoning gets bad press. Decades of brilliant experiments, most notably by Daniel Kahneman and Amos Tversky (e.g., Kahneman, 2011; Kahneman, Slovic, & Tversky, 1982), have shown a plethora of ways in which people get into a terrible muddle when wondering how probable things are. Every psychologist has learned about anchoring, conservatism, the representativeness heuristic, and many other ways that people reveal their probabilistic incompetence. Creating probability theory in the first place was incredibly challenging, exercising great mathematical minds over several centuries (Hacking, 1990). Probabilistic reasoning is hard, and perhaps it should not be surprising that people often do it badly. This view is the starting point for the whole field of judgment and decision-making (JDM) and its cousin, behavioral economics.

Oddly, though, human probabilistic reasoning equally often gets good press. Indeed, many psychologists, neuroscientists, and artificial-intelligence researchers believe that probabilistic reasoning is, in fact, the secret of human intelligence.
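
To make the sampling hypothesis concrete, here is a minimal, illustrative sketch in Python. It is not the authors' model; the Beta(1,1) prior, the sample counts, and the function names are assumptions chosen purely for illustration. The idea it demonstrates is the one stated in the abstract: instead of computing a probability exactly, the estimator draws a handful of simulated "mental samples" and combines them with a weak prior.

import random

def sample_based_estimate(true_p, n_samples, prior_a=1.0, prior_b=1.0):
    # Draw n_samples binary outcomes ("mental samples") of an event whose
    # true probability is true_p, then report the posterior mean of a
    # Beta(prior_a, prior_b) prior updated with those samples. With few
    # samples the estimate is noisy and pulled toward the prior mean (0.5).
    successes = sum(random.random() < true_p for _ in range(n_samples))
    return (successes + prior_a) / (n_samples + prior_a + prior_b)

if __name__ == "__main__":
    random.seed(0)
    for true_p in (0.05, 0.50, 0.95):
        # Average many judgments to see the systematic bias, not the noise.
        few = sum(sample_based_estimate(true_p, 5) for _ in range(10_000)) / 10_000
        many = sum(sample_based_estimate(true_p, 500) for _ in range(1_000)) / 1_000
        print(f"true p = {true_p:.2f} | mean estimate from 5 samples = {few:.3f} "
              f"| from 500 samples = {many:.3f}")

With only five samples, the average judgment of a rare event (true p = 0.05) lands near 0.18 and of a near-certain event (true p = 0.95) near 0.82: estimates regress toward the middle, a conservatism-like bias of the sort the article connects to classic heuristics. With hundreds of samples the estimates converge on the true probabilities, which is why sampling accounts can reconcile "Bayesian brain" claims with the judgment-and-decision-making literature on systematic error.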

Wednesday, October 28, 2020

Small Victories: Texas social workers will no longer be allowed to discriminate against LGBTQ Texans and people with disabilities

Edgar Walters
Texas Tribune
Originally posted 27 Oct 2020

After backlash from lawmakers and advocates, a state board voted Tuesday to undo a rule change that would have allowed social workers to turn away clients who are LGBTQ or have a disability.

The Texas Behavioral Health Executive Council voted unanimously to restore protections for LGBTQ and disabled clients to Texas social workers’ code of conduct just two weeks after removing them.

Gloria Canseco, who was appointed by Gov. Greg Abbott to lead the behavioral health council, expressed regret that the previous rule change was “perceived as hostile to the LGBTQ+ community or to disabled persons.”

“At every opportunity our intent is to prohibit discrimination against any person for any reason,” she said.

Abbott's office recommended earlier this month that the board strip three categories from a code of conduct that establishes when a social worker may refuse to serve someone.


Congratulations to all who helped right a wrong in the mental health professions.

Should we campaign against sex robots?

Danaher, J., Earp, B. D., & Sandberg, A. (forthcoming). 
In J. Danaher & N. McArthur (Eds.) 
Robot Sex: Social and Ethical Implications
Cambridge, MA: MIT Press.

Abstract: 

In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.

Conclusion

Robots are going to form an increasingly integral part of human social life. Sex robots are likely to be among them. Though the proponents of the CASR seem deeply concerned by this prospect, we have argued that there is nothing in the nature of sex robots themselves that warrants preemptive opposition to their development. The arguments of the campaign itself are vague and premised on a misleading analogy between sex robots and human sex work. Furthermore, drawing upon the example of the Campaign to Stop Killer Robots, we suggest that there are no bad-making properties of sex robots that give rise to similarly serious levels of concern. The bad-making properties of sex robots are speculative and indirect: preventing their development may not prevent the problems from arising. Preventing the development of killer robots is very different: if you stop the robots you stop the prima facie harm.

In conclusion, we should preemptively campaign against robots when we have reason to think that a moral or practical harm caused by their use can best be avoided or reduced as a result of those efforts. By contrast, to engage in such a campaign as a way of fighting against—or preempting—indirect harms, whose ultimate source is not the technology itself but rather individual choices or broader social institutions, is likely to be a comparative waste of effort.

Tuesday, October 27, 2020

(Peer) group influence on children's prosocial and antisocial behavior

A. Misch & Y. Dunham
OSF Home

Abstract 

This study investigates the influence of moral in- vs. outgroup behavior on 5-6 and 8-9-year-olds' own moral behavior (N=296). After minimal group assignment, children in Experiment 1 observed adult ingroup or outgroup members engaging in prosocial sharing or antisocial stealing, before they themselves had the opportunity to privately donate stickers or take away stickers from others. Older children shared more than younger children, and prosocial models elicited higher sharing. Surprisingly, group membership had no effect. Experiment 2 investigated the same question using peer models. Children in the younger age group were significantly influenced by ingroup behavior, while older children were not affected by group membership. Additional measures reveal interesting insights into how moral in- and outgroup behavior affects intergroup attitudes, evaluations and choices.

From the Discussion

Thus, while results of our main measure generally support the hypothesis that children are susceptible to social influence, we found that children are not blindly conformist; rather, in contrast to previous research (Wilks et al., 2019) we found that conformity to antisocial behavior was low in general and restricted to younger children watching peer models.  Vulnerability to peer group influence in younger children has also been reported in previous studies on conformity (Haun & Tomasello, 2011; Engelmann et al., 2016) as well as research demonstrating a primacy of group interests over moral concerns (Misch et al., 2018). Thus, our study highlights the younger age group as a time in children’s development in which they seem to be particularly sensitive to peer influences, for better or worse, perhaps indicating a sort of “sensitive period” in which children are working to extract the norms embedded in peer behavior. 

Monday, October 26, 2020

Artificial Intelligence and the Limits of Legal Personality

Chesterman, Simon (August 28, 2020).
Forthcoming in 
International & Comparative Law Quarterly
NUS Law Working Paper No. 2020/025

Abstract

As artificial intelligence (AI) systems become more sophisticated and play a larger role in society, arguments that they should have some form of legal personality gain credence. It has been suggested that this will fill an accountability gap created by the speed, autonomy, and opacity of AI. In addition, a growing body of literature considers the possibility of AI systems owning the intellectual property that they create. The arguments are typically framed in instrumental terms, with comparisons to juridical persons such as corporations. Implicit in those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans they should be entitled to a status comparable to natural persons. This article contends that although most legal systems could create a novel category of legal persons, such arguments are insufficient to show that they should.

Sunday, October 25, 2020

The objectivity illusion and voter polarization in the 2016 presidential election

M. C. Schwalbe, G. L. Cohen, L. D. Ross
PNAS, September 2020, 117(35), 21218–21229

Abstract

Two studies conducted during the 2016 presidential campaign examined the dynamics of the objectivity illusion, the belief that the views of “my side” are objective while the views of the opposing side are the product of bias. In the first, a three-stage longitudinal study spanning the presidential debates, supporters of the two candidates exhibited a large and generally symmetrical tendency to rate supporters of the candidate they personally favored as more influenced by appropriate (i.e., “normative”) considerations, and less influenced by various sources of bias than supporters of the opposing candidate. This study broke new ground by demonstrating that the degree to which partisans displayed the objectivity illusion predicted subsequent bias in their perception of debate performance and polarization in their political attitudes over time, as well as closed-mindedness and antipathy toward political adversaries. These associations, furthermore, remained significant even after controlling for baseline levels of partisanship. A second study conducted two days before the election showed similar perceptions of objectivity versus bias in ratings of blog authors favoring the candidate participants personally supported or opposed. These ratings were again associated with polarization and, additionally, with the willingness to characterize supporters of the opposing candidate as evil and likely to commit acts of terrorism. At a time of particular political division and distrust in America, these findings point to the exacerbating role played by the illusion of objectivity.

Significance

Political polarization increasingly threatens democratic institutions. The belief that “my side” sees the world objectively while the “other side” sees it through the lens of its biases contributes to this political polarization and accompanying animus and distrust. This conviction, known as the “objectivity illusion,” was strong and persistent among Trump and Clinton supporters in the weeks before the 2016 presidential election. We show that the objectivity illusion predicts subsequent bias and polarization, including heightened partisanship over the presidential debates. A follow-up study showed that both groups impugned the objectivity of a putative blog author supporting the opposition candidate and saw supporters of that opposing candidate as evil.

Saturday, October 24, 2020

Trump's Strangest Lie: A Plague of Suicides Under His Watch

Gilad Edelman
wired.com
Originally published 23 Oct 2020

In last night’s presidential debate, Donald Trump repeated one of his more unorthodox reelection pitches. “People are losing their jobs,” he said. “They’re committing suicide. There’s depression, alcohol, drugs at a level that nobody’s ever seen before.”

It’s strange to hear an incumbent president declare, as an argument in his own favor, that a wave of suicides is occurring under his watch. It’s even stranger given that it’s not true. While Trump has been warning since March that any pandemic lockdowns would lead to “suicides by the thousands,” several studies from abroad have found that when governments imposed such restrictions in the early waves of the pandemic, there was no corresponding increase in these deaths. In fact, suicide rates may even have declined. A preprint study released earlier this week found that the suicide rate in Massachusetts didn’t budge even as that state imposed a strong stay-at-home order in March, April, and May.

(cut)

Add this to the list of tragic ironies of the Trump era: The president is using the nonexistent link between lockdowns and suicide to justify an agenda that really could cause more people to take their own lives.