Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, March 15, 2021

What is 'purity'? Conceptual murkiness in moral psychology.

Gray, K., DiMaggio, N., Schein, C., 
& Kachanoff, F. (2021, February 3).
https://doi.org/10.31234/osf.io/vfyut

Abstract

Purity is an important topic in psychology. It has a long history in moral discourse, has helped catalyze paradigm shifts in moral psychology, and is thought to underlie political differences. But what exactly is “purity?” To answer this question, we review the history of purity and then systematically examine 158 psychology papers that define and operationalize (im)purity. In contrast to the many concepts defined by what they are, purity is often understood by what it isn’t—obvious dyadic harm. Because of this “contra”-harm understanding, definitions and operationalizations of purity are quite varied. Acts used to operationalize impurity include taking drugs, eating your sister’s scab, vandalizing a church, wearing unmatched clothes, buying music with sexually explicit lyrics, and having a messy house. This heterogeneity makes purity a “chimera”—an entity composed of various distinct elements. Our review reveals that the “contra-chimera” of purity has 9 different scientific understandings, and that most papers define purity differently from how they operationalize it. Although people clearly moralize diverse concerns—including those related to religion, sex, and food—such heterogeneity in conceptual definitions is problematic for theory development. Shifting definitions of purity provide “theoretical degrees of freedom” that make falsification extremely difficult. Doubts about the coherence and consistency of purity raise questions about key purity-related claims of modern moral psychology, including the nature of political differences and the cognitive foundations of moral judgment.

Sunday, March 14, 2021

The “true me”—one or many?

Berent, I., & Platt, M. (2019, December 9). 
https://doi.org/10.31234/osf.io/tkur5

Abstract

Recent results suggest that people hold a notion of the true self, distinct from the self. Here, we seek to further elucidate the “true me”—whether it is good or bad, material or immaterial. Critically, we ask whether the true self is unitary. To address these questions, we invited participants to reason about John—a character who simultaneously exhibits both positive and negative moral behaviors. John’s character was gauged via two tests—a brain scan and a behavioral test—whose results invariably diverged (i.e., one test indicated that John’s moral core is positive and the other that it is negative). Participants assessed John’s true self via two questions: (a) Did John commit his acts (positive and negative) freely? and (b) What is John’s essence really? Responses to the two questions diverged. When asked to evaluate John’s moral core explicitly (by reasoning about his free will), people invariably described John’s true self as good. But when John’s moral core was assessed implicitly (by considering his essence), people sided with the outcomes of the brain test. These results demonstrate that people hold conflicting notions of the true self. We formally support this proposal by presenting a grammar of the true self, couched within Optimality Theory. We show that the constraint rankings necessary to capture the explicit and implicit views of the true self are distinct. Our intuitive belief in a true unitary “me” is thus illusory.

From the Conclusion

When we consider a person’s moral core explicitly (by evaluating which acts they commit freely), we consider them as having a single underlying moral valence (rather than multiple competing attributes), and that moral core is decidedly good. Thus, our explicit notion of the true moral self is good and unitary, a proposal that is supported by previous findings (e.g., De Freitas & Cikara, 2018; Molouki & Bartels, 2017; Newman et al., 2014b; Tobia, 2016). But when we consider the person’s moral fiber implicitly, we evaluate their essence—a notion that is devoid of specific moral valence (good or bad), but is intimately linked to their material body. This material view of essence is in line with previous results suggesting that children (Gelman, 2003; Gelman & Wellman, 1991) and infants (Setoh et al., 2013) believe that living things must have “insides”, and that their essence corresponds to a piece of matter (Springer & Keil, 1991) that is localized at the center of the body (Newman & Keil, 2008). Further support for this material notion of essence comes from people’s tendency to conclude that psychological traits that are localized in the brain are more likely to be innate (Berent et al., 2019; Berent et al., 2019, September 10). The persistent link between John’s essence and the outcomes of the brain probe is also in line with this proposal.

Saturday, March 13, 2021

The Dynamics of Motivated Beliefs

Zimmermann, Florian. 2020.
American Economic Review, 110 (2): 337-61.

Abstract
A key question in the literature on motivated reasoning and self-deception is how motivated beliefs are sustained in the presence of feedback. In this paper, we explore dynamic motivated belief patterns after feedback. We establish that positive feedback has a persistent effect on beliefs. Negative feedback, instead, influences beliefs in the short run, but this effect fades over time. We investigate the mechanisms of this dynamic pattern, and provide evidence for an asymmetry in the recall of feedback. Finally, we establish that, in line with theoretical accounts, incentives for belief accuracy mitigate the role of motivated reasoning.

From the Discussion

In light of the finding that negative feedback has only limited effects on beliefs in the long run, the question arises as to whether people should become entirely delusional about themselves over time. Note that results from the incentive treatments highlight that incentives for recall accuracy bound the degree of self-deception and thereby possibly prevent motivated agents from becoming entirely delusional. There also exists another, rather mechanical counterforce: the perception of feedback likely changes as people become more confident. In terms of the experiment, if a subject believes that the chances of ranking in the upper half are mediocre, then that subject will likely perceive two comparisons out of three as positive feedback. If, instead, the same subject is almost certain they rank in the upper half, then that subject will likely perceive the same feedback as rather negative. This “perception effect” is reflected in the Bayesian definition of feedback that we report as a robustness check in the Appendix of the paper. An immediate consequence of this change in perception is that the more confident an agent becomes, the more likely it is that they will obtain negative feedback. Unless an agent ignores negative feedback entirely, this should act as a force that bounds people’s delusions.
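
The arithmetic behind this perception effect is easy to make concrete. Below is a minimal sketch (ours, not the paper's code; scoring feedback as observed wins minus expected wins is a simplifying assumption, cruder than the Bayesian definition in the paper's Appendix) of how the same objective outcome can feel positive to a doubtful subject and negative to a confident one.

```python
# Illustrative sketch of the "perception effect": a subject holds a
# belief p of ranking in the upper half and compares themselves to
# three other participants. The same result (2 wins out of 3) is
# perceived relative to expectation.

def perceived_valence(wins: int, n: int, p_upper_half: float) -> float:
    """Perceived feedback = observed wins minus expected wins.

    As a rough proxy (an assumption, not the paper's definition), the
    expected per-comparison win rate is taken to be the subject's
    confidence of ranking in the upper half.
    """
    expected_wins = n * p_upper_half
    return wins - expected_wins

for p in (0.5, 0.95):
    v = perceived_valence(wins=2, n=3, p_upper_half=p)
    feels = "positive" if v > 0 else "negative"
    print(f"confidence={p:.2f}: 2/3 wins feels {feels} ({v:+.2f})")
```

Run on these illustrative numbers, a subject with mediocre confidence (0.50) perceives the feedback as positive (+0.50), while an almost-certain subject (0.95) perceives the very same feedback as negative (-0.85), which is the bounding force the discussion describes.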

Friday, March 12, 2021

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Schubert, S., Caviola, L. & Faber, N.S. 
Sci Rep 9, 15100 (2019). 
https://doi.org/10.1038/s41598-019-50145-9

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

Thus, when asked in the most straightforward and unqualified way, participants do not find human extinction uniquely bad. 

Thursday, March 11, 2021

Decision making can be improved through observational learning

Yoon, H., Scopelliti, I. & Morewedge, C.
Organizational Behavior and 
Human Decision Processes
Volume 162, January 2021, 
Pages 155-188

Abstract

Observational learning can debias judgment and decision making. One-shot observational learning-based training interventions (akin to “hot seating”) can produce reductions in cognitive biases in the laboratory (i.e., anchoring, representativeness, and social projection), and successfully teach a decision rule that increases advice taking in a weight on advice paradigm (i.e., the averaging principle). These interventions improve judgment, rule learning, and advice taking more than practice. We find observational learning-based interventions can be as effective as information-based interventions. Their effects are additive for advice taking, and for accuracy when advice is algorithmically optimized. As found in the organizational learning literature, explicit knowledge transferred through information appears to reduce the stickiness of tacit knowledge transferred through observational learning. Moreover, observational learning appears to be a unique debiasing training strategy, an addition to the four proposed by Fischhoff (1982). We also report new scales measuring individual differences in anchoring, representativeness heuristics, and social projection.

Highlights

• Observational learning training interventions improved judgment and decision making.

• OL interventions reduced anchoring bias, representativeness, and social projection.

• Observational learning training interventions increased advice taking.

• Observational learning and information complementarily taught a decision rule.

• We provide new bias scales for anchoring, representativeness, and social projection.
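
For readers unfamiliar with the weight on advice paradigm mentioned in the abstract, the sketch below (our illustration, not the authors' materials) shows how advice taking is conventionally quantified and what the averaging principle taught by the intervention corresponds to.

```python
# Illustrative sketch: the standard weight-on-advice (WOA) measure
# used in judge-advisor studies.
#   WOA = (final - initial) / (advice - initial)
# WOA = 0 means the advice was ignored, WOA = 1 means full adoption,
# and the averaging principle corresponds to WOA = 0.5.

def weight_on_advice(initial: float, advice: float, final: float) -> float:
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

def averaging_rule(initial: float, advice: float) -> float:
    """The averaging principle: revise to the mean of own estimate and advice."""
    return (initial + advice) / 2

initial, advice = 120.0, 180.0          # hypothetical estimates
final = averaging_rule(initial, advice)  # 150.0
print(weight_on_advice(initial, advice, final))  # 0.5
```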

Wednesday, March 10, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

Gary Grossman
VentureBeat
Originally posted 13 Feb 21

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University in London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals that are used like radar. In this research, participants in the study watched a video while radio signals were sent towards them and measured when they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.
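
As a rough illustration of the pipeline being described (our sketch only; the study's actual architecture, features, and data are not reproduced here), a classifier of this kind maps physiological signals recovered from the reflected radio waves, such as heart and breathing rates, to one of the four emotion labels. The training data below are random placeholders.

```python
# Structural sketch, not the Queen Mary model: a small neural network
# classifying four emotions from physiological features. Heart-rate and
# breathing-rate features stand in for the paper's learned
# representations; the data and labels are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]

rng = np.random.default_rng(0)
# Hypothetical training data: [heart_rate_bpm, breathing_rate_bpm]
X = rng.normal(loc=[[75, 14]], scale=[[12, 3]], size=(400, 2))
y = rng.integers(0, 4, size=400)  # placeholder labels for illustration

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

sample = [[88.0, 18.0]]  # elevated heart and breathing rates
print(EMOTIONS[int(model.predict(sample)[0])])
```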

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up the very Orwellian idea of the ‘thought police’ from 1984. In the novel, the thought police watchers are experts at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never master learning exactly what a person is thinking.


Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning. In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals—such as profit maximization via user engagement—can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.
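
A minimal sketch of these learning dynamics (our toy model, not the authors' analysis; all parameters are illustrative assumptions) combines the two mechanisms the paper reports: reinforcement from social feedback and drift toward the network's expressive norm, with feedback sensitivity damped in networks where outrage is already common.

```python
# Toy model: a user's propensity p to post outrage evolves through
# (1) reinforcement from social feedback and (2) norm learning.
import random

def simulate_user(norm: float, lr: float = 0.1, days: int = 200,
                  feedback_per_post: float = 1.0, seed: int = 1) -> float:
    """Return the user's propensity to post outrage after `days` of activity."""
    random.seed(seed)
    p = 0.2  # initial propensity to express outrage
    for _ in range(days):
        posted = random.random() < p
        reward = feedback_per_post if posted else 0.0
        # Reinforcement: positive feedback raises the propensity.
        # Sensitivity is damped by (1 - norm), reflecting the finding
        # that users in extreme networks respond less to feedback.
        p += lr * (1 - norm) * reward * (1 - p)
        # Norm learning: drift toward the network's expressive norm.
        p += lr * (norm - p)
        p = min(max(p, 0.0), 1.0)
    return p

print(round(simulate_user(norm=0.2), 2))  # moderate network
print(round(simulate_user(norm=0.7), 2))  # ideologically extreme network
```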


People are more likely to express online "moral outrage" if they have been rewarded for it in the past or if it is common in their own social network.  They are even willing to express far more moral outrage than they genuinely feel in order to fit in.

Monday, March 8, 2021

Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Stix, C., Maas, M.M.
AI Ethics (2021). 
https://doi.org/10.1007/s43681-020-00037-w

Abstract

Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

From the Conclusion

AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present an experienced fragmentation in the responsible AI policy community, amongst clusters of scholars focusing on ‘near-term’ AI risks, and those focusing on ‘longer-term’ risks. This paper has sought to map the practical space for inter-community collaboration, with a view towards the practical development of AI policy.

As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an ‘incompletely theorized agreement’, for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework.

Sunday, March 7, 2021

Why do inequality and deprivation produce high crime and low trust?


De Courson, B., Nettle, D. 
Sci Rep 11, 1937 (2021). 
https://doi.org/10.1038/s41598-020-80897-8

Abstract

Humans sometimes cooperate to mutual advantage, and sometimes exploit one another. In industrialised societies, the prevalence of exploitation, in the form of crime, is related to the distribution of economic resources: more unequal societies tend to have higher crime, as well as lower social trust. We created a model of cooperation and exploitation to explore why this should be. Distinctively, our model features a desperation threshold, a level of resources below which it is extremely damaging to fall. Agents do not belong to fixed types, but condition their behaviour on their current resource level and the behaviour in the population around them. We show that the optimal action for individuals who are close to the desperation threshold is to exploit others. This remains true even in the presence of severe and probable punishment for exploitation, since successful exploitation is the quickest route out of desperation, whereas being punished does not make already desperate states much worse. Simulated populations with a sufficiently unequal distribution of resources rapidly evolve an equilibrium of low trust and zero cooperation: desperate individuals try to exploit, and non-desperate individuals avoid interaction altogether. Making the distribution of resources more equal or increasing social mobility is generally effective in producing a high cooperation, high trust equilibrium; increasing punishment severity is not.
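
The core logic of the model can be illustrated with a toy calculation (ours, not the authors' simulation; all numbers are assumptions): once falling below the threshold carries a large utility penalty, an already desperate agent has little left to lose from punishment, so a risky one-shot payoff can dominate safe foraging even when punishment is severe and probable.

```python
# Toy illustration of the desperation-threshold logic.
THRESHOLD = 10.0
DESPERATION_PENALTY = 100.0  # utility cost of being below the threshold

def utility(resources: float) -> float:
    """Resources valued linearly, with a steep penalty below the threshold."""
    return resources - (DESPERATION_PENALTY if resources < THRESHOLD else 0.0)

def eu_exploit(resources: float, gain: float = 8.0, fine: float = 6.0,
               p_caught: float = 0.7) -> float:
    """Expected utility of exploiting under severe and probable punishment."""
    return (p_caught * utility(resources - fine)
            + (1 - p_caught) * utility(resources + gain))

def eu_forage(resources: float, gain: float = 1.0) -> float:
    """Safe but small payoff: not enough to escape desperation in one step."""
    return utility(resources + gain)

for r in (5.0, 40.0):  # desperate vs comfortably secure
    action = "exploit" if eu_exploit(r) > eu_forage(r) else "forage"
    print(f"resources={r}: best action = {action}")
```

With these illustrative numbers, the desperate agent (resources = 5) exploits despite a 70% chance of punishment, because punishment barely worsens an already desperate state, while the secure agent (resources = 40) forages.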

From the Discussion

Within criminology, our prediction of risky exploitative behaviour when in danger of falling below a threshold of desperation is reminiscent of Merton’s strain theory of deviance. Under this theory, deviance results when individuals have a goal (remaining constantly above the threshold of participation in society), but the available legitimate means are insufficient to get them there (neither foraging alone nor cooperation has a large enough one-time payoff). They thus turn to risky alternatives, despite the drawbacks of these (see also Ref. 32 for similar arguments). This explanation is not reducible to desperation making individuals discount the future more steeply, which is often invoked as an explanation for criminality. Agents in our model do not face choices between smaller-sooner and larger-later rewards; the payoff for exploitation is immediate, whether successful or unsuccessful. Also note the philosophical differences between our approach and ‘self-control’ styles of explanation. Those approaches see offending as deficient decision-making: it would be in people’s interests not to offend, but some can’t manage it (see Ref. 35 for a critical review). Like economic and behavioural-ecological theories of crime more generally, ours assumes instead that there are certain situations or states where offending is the best of a bad set of available options.