Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning. In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals, such as profit maximization via user engagement, can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.


People are more likely to express online "moral outrage" if they have been rewarded for it in the past or if it is common in their own social network. They are even willing to express far more moral outrage than they genuinely feel in order to fit in.
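The reinforcement-learning dynamic the study describes can be illustrated with a toy simulation (this is not the authors' model; all parameters, including the initial propensity, learning rate, and feedback probability, are hypothetical): an agent posts repeatedly, and positive social feedback for an outrage expression nudges its propensity to express outrage upward.

```python
import random

def simulate_outrage(n_posts=1000, learning_rate=0.05, feedback_prob=0.8, seed=0):
    """Toy agent: each round it expresses outrage with probability p.
    Positive feedback pulls p toward 1, absent feedback pulls it toward 0,
    via a simple Rescorla-Wagner-style update (all values hypothetical)."""
    rng = random.Random(seed)
    p = 0.2  # initial propensity to express outrage
    history = []
    for _ in range(n_posts):
        expressed = rng.random() < p
        if expressed:
            rewarded = rng.random() < feedback_prob  # likes/shares arrive
            target = 1.0 if rewarded else 0.0
            p += learning_rate * (target - p)  # update only after expressing
        history.append(p)
    return history

history = simulate_outrage()
```

Under frequent positive feedback the propensity drifts upward toward an equilibrium set by the feedback rate, which is the qualitative pattern the paper reports; a newsfeed algorithm that raises `feedback_prob` would, in this sketch, raise the long-run rate of outrage expression.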

Monday, March 8, 2021

Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Stix, C., Maas, M.M.
AI Ethics (2021). 
https://doi.org/10.1007/s43681-020-00037-w

Abstract

Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

From the Conclusion

AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present an experienced fragmentation in the responsible AI policy community, amongst clusters of scholars focusing on ‘near-term’ AI risks, and those focusing on ‘longer-term’ risks. This paper has sought to map the practical space for inter-community collaboration, with a view towards the practical development of AI policy.

As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an ‘incompletely theorized agreement’, for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework.

Sunday, March 7, 2021

Why do inequality and deprivation produce high crime and low trust?


De Courson, B., Nettle, D. 
Sci Rep 11, 1937 (2021). 
https://doi.org/10.1038/s41598-020-80897-8

Abstract

Humans sometimes cooperate to mutual advantage, and sometimes exploit one another. In industrialised societies, the prevalence of exploitation, in the form of crime, is related to the distribution of economic resources: more unequal societies tend to have higher crime, as well as lower social trust. We created a model of cooperation and exploitation to explore why this should be. Distinctively, our model features a desperation threshold, a level of resources below which it is extremely damaging to fall. Agents do not belong to fixed types, but condition their behaviour on their current resource level and the behaviour in the population around them. We show that the optimal action for individuals who are close to the desperation threshold is to exploit others. This remains true even in the presence of severe and probable punishment for exploitation, since successful exploitation is the quickest route out of desperation, whereas being punished does not make already desperate states much worse. Simulated populations with a sufficiently unequal distribution of resources rapidly evolve an equilibrium of low trust and zero cooperation: desperate individuals try to exploit, and non-desperate individuals avoid interaction altogether. Making the distribution of resources more equal or increasing social mobility is generally effective in producing a high cooperation, high trust equilibrium; increasing punishment severity is not.
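The model's core intuition, that exploitation becomes the best of a bad set of options near the desperation threshold, can be sketched as a one-step expected-utility comparison (a simplified illustration, not the authors' evolutionary simulation; every payoff and probability below is hypothetical):

```python
def best_action(resources, threshold=10.0,
                coop_gain=2.0, exploit_gain=8.0,
                punish_prob=0.7, punish_cost=10.0,
                desperation_penalty=50.0):
    """Choose the action with the higher expected utility when ending up
    below `threshold` incurs a large fixed penalty (desperation)."""
    def utility(level):
        return level - (desperation_penalty if level < threshold else 0.0)

    coop = utility(resources + coop_gain)
    exploit = (punish_prob * utility(resources + exploit_gain - punish_cost)
               + (1 - punish_prob) * utility(resources + exploit_gain))
    return "exploit" if exploit > coop else "cooperate"
```

With these numbers, an agent at 5 resources exploits even though punishment is severe and probable: cooperation cannot clear the threshold, so it stays desperate either way, while a lucky exploitation escapes. An agent at 20 resources cooperates, because there punishment genuinely makes things worse. This mirrors the abstract's point that being punished "does not make already desperate states much worse."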

From the Discussion

Within criminology, our prediction of risky exploitative behaviour when in danger of falling below a threshold of desperation is reminiscent of Merton’s strain theory of deviance. Under this theory, deviance results when individuals have a goal (remaining constantly above the threshold of participation in society), but the available legitimate means are insufficient to get them there (neither foraging alone nor cooperation has a large enough one-time payoff). They thus turn to risky alternatives, despite the drawbacks of these (see also Ref.32 for similar arguments). This explanation is not reducible to desperation making individuals discount the future more steeply, which is often invoked as an explanation for criminality. Agents in our model do not face choices between smaller-sooner and larger-later rewards; the payoff for exploitation is immediate, whether successful or unsuccessful. Also note the philosophical differences between our approach and ‘self-control’ styles of explanation. Those approaches see offending as deficient decision-making: it would be in people’s interests not to offend, but some can’t manage it (see Ref.35 for a critical review). Like economic and behavioural-ecological theories of crime more generally, ours assumes instead that there are certain situations or states where offending is the best of a bad set of available options.

Saturday, March 6, 2021

Robots, Ethics, and Intimacy: The Need for Scientific Research

Borenstein, J., Arkin, R.
(2019) Philosophical Studies Series,
vol 134. Springer, Cham.

Abstract

Intimate relationships between robots and human beings may begin to form in the near future. Market forces, customer demand, and other factors may drive the creation of various forms of robots to which humans may form strong emotional attachments. Yet prior to the technology becoming fully actualized, numerous ethical, legal, and social issues must be addressed. This could be accomplished in part by establishing a rigorous scientific research agenda in the realm of intimate robotics, the aim of which would be to explore what effects the technology may have on users and on society more generally. Our goal is not to resolve whether the development of intimate robots is ethically appropriate. Rather, we contend that if such robots are going to be designed, then an obligation emerges to prevent harm that the technology could cause.

Friday, March 5, 2021

Free to blame? Belief in free will is related to victim blaming

Genschow, O., & Vehlow, B.
Consciousness and Cognition
Volume 88, February 2021, 103074

Abstract

The more people believe in free will, the harsher their punishment of criminal offenders. A reason for this finding is that belief in free will leads individuals to perceive others as responsible for their behavior. While research supporting this notion has mainly focused on criminal offenders, the perspective of the victims has been neglected so far. We filled this gap and hypothesized that individuals’ belief in free will is positively correlated with victim blaming—the tendency to make victims responsible for their bad luck. In three studies, we found that the more individuals believe in free will, the more they blame victims. Study 3 revealed that belief in free will is correlated with victim blaming even when controlling for just world beliefs, religious worldviews, and political ideology. The results contribute to a more differentiated view of the role of free will beliefs and attributed intentions.

Highlights

• Past research indicated that belief in free will increases punishment of criminal offenders.

• However, this research ignored the perception of the victims.

• We filled this gap by conducting three studies.

• All studies find that belief in free will correlates with the tendency to blame victims.

From the Discussion

In the last couple of decades, claims that free will is nothing more than an illusion have become prevalent in the popular press (e.g., Chivers 2010; Griffin, 2016; Wolfe, 1997). Based on such claims, scholars across disciplines started debating potential societal consequences for the case that people would start disbelieving in free will. For example, some philosophers argued that disbelief in free will would have catastrophic consequences, because people would no longer try to control their behavior and start acting immorally (e.g., Smilansky, 2000, 2002). Likewise, psychological research has mainly focused on the downsides of disbelief in free will. For example, weakening free will belief led participants to behave less morally and responsibly (Baumeister et al., 2009; Protzko et al., 2016; Vohs & Schooler, 2008). In contrast to these results, our findings illustrate a more positive side of disbelief in free will, as higher levels of disbelief in free will would reduce victim blaming.

Thursday, March 4, 2021

‘Pastorally dangerous’: U.S. bishops risk causing confusion about vaccines, ethicists say

Michael J. O’Loughlin
America Magazine
Originally published March 02, 2021

Here is an excerpt:

Anthony Egan, S.J., a Jesuit priest and lecturer in theology in South Africa, said church leaders publishing messages about hypothetical situations during a crisis is “unhelpful” as Catholics navigate life in a pandemic.

“I think it’s pastorally dangerous because people are dealing with all kinds of crises—people are faced with unemployment, people are faced with disease, people are faced with death—and to make this kind of statement just adds to the general feeling of unease, a general feeling of crisis,” Father Egan said, noting that in South Africa, which has been hard hit by a more aggressive variant, the Johnson & Johnson vaccine is the only available option. “I don’t think that’s pastorally helpful.”

The choice about taking a vaccine like Johnson & Johnson’s must come down to individual conscience, he said. “I think it’s irresponsible to make a claim that you must absolutely not or absolutely must take the drug,” he said.

Ms. Fullam agreed, saying modern life is filled with difficult dilemmas stemming from previous injustices and “one of the great things about the Catholic moral tradition is that we recognize the world is a messy place, but we don’t insist Catholics stay away from that messiness.” Catholics, she said, are called “to think about how to make the situation better” rather than retreat in the face of complexity and given the ongoing pandemic, receiving a vaccine with a remote connection to abortion could be the right decision—especially in communities where access to vaccines might be difficult.

Wednesday, March 3, 2021

Evolutionary biology meets consciousness: essay review

Browning, H., Veit, W. 
Biol Philos 36, 5 (2021). 
https://doi.org/10.1007/s10539-021-09781-7

Abstract

In this essay, we discuss Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul from an interdisciplinary perspective. Constituting perhaps the longest treatise on the evolution of consciousness, Ginsburg and Jablonka unite their expertise in neuroscience and biology to develop a beautifully Darwinian account of the dawning of subjective experience. Though it would be impossible to cover all its content in a short book review, here we provide a critical evaluation of their two key ideas—the role of Unlimited Associative Learning in the evolution of, and detection of, consciousness and a metaphysical claim about consciousness as a mode of being—in a manner that will hopefully overcome some of the initial resistance of potential readers to tackle a book of this length.

Here is one portion:

Modes of being

The second novel idea within their book is to conceive of consciousness as a new mode of being, rather than a mere trait. This part of their argument may appear unusual to many operating in the debate, not the least because this formulation—not unlike their choice to include Aristotle’s sensitive soul in the title—evokes a sense of outdated and strange metaphysics. We share some of this opposition to this vocabulary, but think it best conceived as a metaphor.

They begin their book by introducing the idea of teleological (goal-directed) systems and the three ‘modes of being’, taken from the works of Aristotle, each of which is considered to have a unique telos (goal). These are: life (survival/reproduction), sentience (value ascription to stimuli), and rationality (value ascription to concepts). The focus of this book is the second of these—the “sensitive soul”. Rather than a trait, such as vision, G&J see consciousness as a mode of being, in the same way as the emergence of life and rational thought also constitute new modes of being.

In several places throughout their book, G&J motivate their account through this analogy, i.e. by drawing a parallel from consciousness to life and/or rationality. Neither, they think, can be captured in a simple definition or trait, thus explaining the lack of progress on trying to come up with definitions for these phenomena. Compare their discussion of the distinction between life and non-life. Life, they argue, is not a functional trait that organisms possess, but rather a new way of being that opens up new possibilities; so too with consciousness. It is a new form of biological organization at a level above the organism that gives rise to a “new type of goal-directed system”, one which faces a unique set of challenges and opportunities. They identify three such transitions—the transition from non-life to life (the “nutritive soul”), the transition from non-conscious to conscious (the “sensitive soul”) and the transition from non-rational to rational (the “rational soul”). All three transitions mark a change to a new form of being, one in which the types of goals change. But while this is certainly correct in the sense of constituting a radical transformation in the kinds of goal-directed systems there are, we have qualms with the idea that this formal equivalence or abstract similarity can be used to ground more concrete properties. Yet G&J use this analogy to motivate their UAL account in parallel to unlimited heredity as a transition marker of life.

Tuesday, March 2, 2021

Surprise: 56% of US Catholics Favor Legalized Abortion

Dalia Fahmy
Pew Research Center
Originally posted October 20, 2020

Here are two excerpts:

1. More than half of U.S. Catholics (56%) said abortion should be legal in all or most cases, while roughly four-in-ten (42%) said it should be illegal in all or most cases, according to the 2019 Pew Research Center survey. Although most Catholics generally approve of legalized abortion, the vast majority favor at least some restrictions. For example, while roughly one-third of Catholics (35%) said abortion should be legal in most cases, only around one-fifth (21%) said it should be legal in all cases. By the same token, 28% of Catholics said abortion should be illegal in most cases, while half as many (14%) said it should be illegal in all cases.

Compared with other Christian groups analyzed in the data, Catholics were about as likely as White Protestants who are not evangelical (60%) and Black Protestants (64%) to support legal abortion, and much more likely than White evangelical Protestants (20%) to do so. Among Americans who are religiously unaffiliated – those who say they are atheist, agnostic or “nothing in particular” – the vast majority (83%) said abortion should be legal in all or most cases.

(cut)

6. Even though most Catholics said abortion should generally be legal, a majority also said abortion is morally wrong. In fact, the share who said that abortion is morally wrong (57%), according to data from a 2017 survey, and the share who said it should be legal (56%) are almost identical. Among adults in other religious groups, there was a wide range of opinions on this question: Almost two-thirds of Protestants (64%) said abortion is morally wrong, including 77% of those who identify with evangelical Protestant denominations. Among the religiously unaffiliated, the vast majority said abortion is morally acceptable (34%) or not a moral issue (42%).

Monday, March 1, 2021

Morality justifies motivated reasoning in the folk ethics of belief

Corey Cusimano & Tania Lombrozo
Cognition
19 January 2021

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the General Discussion

5.2. Implications for motivated reasoning

Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment to not rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work has failed to demonstrate that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief, and specifically, prescribe motivated reasoning to others.

(cut)

Our findings suggest more proximate explanations for these biases: That lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally-beneficial optimistic beliefs than they do for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence that one has.