Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, April 15, 2022

Strategic identity signaling in heterogeneous networks

T. van der Does, M. Galesic, et al.
PNAS, 119(10), e2117898119 (2022).

Abstract

Individuals often signal identity information to facilitate assortment with partners who are likely to share norms, values, and goals. However, individuals may also be incentivized to encrypt their identity signals to avoid detection by dissimilar receivers, particularly when such detection is costly. This idea has previously been formalized into a theory of covert signaling using mathematical modeling. In this paper, we provide an empirical test of the theory of covert signaling in the context of political identity signaling surrounding the 2020 US presidential election. To identify likely covert and overt signals on Twitter, we use methods relying on differences in detection between ingroup and outgroup receivers. We strengthen our experimental predictions with additional mathematical modeling and examine the usage of selected covert and overt tweets in a behavioral experiment. We find that participants strategically adjust their signaling behavior in response to the political composition of their audiences. These results support our predictions and point to opportunities for further theoretical development. Our findings have implications for our understanding of political communication, social identity, pragmatics, hate speech, and the maintenance of cooperation in diverse populations.

Significance

Much of online conversation today consists of signaling one’s political identity. Although many signals are obvious to everyone, others are covert, recognizable to one’s ingroup while obscured from the outgroup. This type of covert identity signaling is critical for collaborations in a diverse society, but measuring covert signals has been difficult, slowing down theoretical development. We develop a method to detect covert and overt signals in tweets posted before the 2020 US presidential election and use a behavioral experiment to test predictions of a mathematical theory of covert signaling. Our results show that covert political signaling is more common when the perceived audience is politically diverse and opens doors to a better understanding of communication in politically polarized societies.

From the Discussion

The theory predicts that individuals should use more covert signaling in more heterogeneous groups or when they are in the minority. We found support for this prediction in the ways people shared political speech in a behavioral experiment. We observed the highest levels of covert signaling when audiences consisted almost entirely of cross-partisans, supporting the notion that covert signaling is a strategy for avoiding detection by hostile outgroup members. Of note, we selected tweets for our study at a time of heightened partisan divisions: the four weeks preceding the 2020 US presidential election. Consequently, these tweets mostly discussed the opposing political party. This focus was reflected in our behavioral experiment, in which we did not observe an effect of audience composition when all members were (more or less extreme) copartisans. In that societal context, participants might have perceived the cost of dislikes to be minimal and have likely focused on partisan disputes in their real-life conversations happening around that time. Future work testing the theory of covert signaling should also examine signaling strategies in copartisan conversations during times of salient intragroup political divisions.


Editor's Note: Wondering if this research generalizes to other covert forms of communication during psychotherapy.

Thursday, April 14, 2022

AI won’t steal your job, just make it meaningless

John Danaher
iainews.com
Originally published 18 MAR 22

New technologies are often said to be in danger of making humans redundant, replacing them with robots and AI, and making work disappear altogether. A crisis of identity and purpose might result from that, but Silicon Valley tycoons assure us that a universal basic income could at least take care of people’s material needs, leaving them with plenty of leisure time in which to forge new identities and find new sources of purpose.

This, however, paints an overly simplistic picture. What seems more likely to happen is that new technologies will not make humans redundant at a mass scale, but will change the nature of work, making it worse for many, and sapping the elements that give it some meaning and purpose. It’s the worst of both worlds: we’ll continue working, but our jobs will become increasingly meaningless. 

History has some lessons to teach us here. Technology has had a profound effect on work in the past, not just on the ways in which we carry out our day-to-day labour, but also on how we understand its value. Consider the humble plough. In its most basic form, it is a hand-operated tool, consisting of little more than a pointed stick that scratches a furrow through the soil. This helps a farmer to sow seeds but does little else. Starting in the Middle Ages, however, more complex, ‘heavy’ ploughs began to be used by farmers in Northern Europe. These heavy ploughs rotated and turned the earth, bringing nutrient-rich soils to the surface, and radically altering the productivity of farming. Farming ceased being solely about subsistence. It started to be about generating wealth.

The argument about how the heavy plough transformed the nature of work was advanced by historian Lynn White Jr in his classic study Medieval Technology and Social Change. Writing in the idiom of the early 1960s, he argued that “No more fundamental change in the idea of man’s relation to the soil can be imagined: once man had been part of nature; now he became her exploiter.”

It is easy to trace a line – albeit one that takes a detour through Renaissance mercantilism and the Industrial Revolution – from the development of the heavy plough to our modern conception of work. Although work is still an economic necessity for many people, it is not just that. It is something more. We don’t just do it to survive; we do it to thrive. Through our work we can buy into a certain lifestyle and affirm a certain identity. We can develop mastery and cultivate self-esteem; we make a contribution to our societies and a name for ourselves.

Wednesday, April 13, 2022

Moralization of rationality can stimulate, but intellectual humility inhibits, sharing of hostile conspiratorial rumors.

Marie, A., & Petersen, M. (2022, March 4). 
https://doi.org/10.31219/osf.io/k7u68

Abstract

Many assume that if citizens become more inclined to moralize the values of evidence-based and logical thinking, political hostility and conspiracy theories would be less widespread. Across two large surveys (N = 3675) run in the U.S.A. in 2021 (one exploratory and one preregistered), we provide the first demonstration that moralization of rationality can actually stimulate the spread of conspiratorial and hostile news. This reflects the fact that the moralization of rationality can be highly interrelated with status seeking, corroborating arguments that self-enhancing strategies often advance hidden behind claims to objectivity and morality. In contrast to moral grandstanding on the issue of rationality, our studies find robust evidence that intellectual humility (i.e., the awareness that intuitions are fallible and that suspending critique is often desirable) may immunize people from sharing and believing hostile conspiratorial news. All associations generalized to hostile conspiratorial news both “fake” and anchored in real events.

General Discussion

Many observers assume that citizens more morally sensitized to the values of evidence-based and methodic thinking would be better protected from the perils of political polarization, conspiracy theories, and “fake news.” Yet, attention to the discourse of individuals who pass along politically hostile and conspiratorial claims suggests that they often sincerely believe themselves to be free and independent “critical thinkers” who care more about “facts” than the “unthinking sheep” to which they assimilate most of the population (Harambam & Aupers, 2017).

Across two large online surveys (N = 3675) conducted in the context of the highly polarized U.S.A. of 2021, we provide the first piece of evidence that moralizing epistemic rationality—a motivation for rationality defined in the abstract—may stimulate the dissemination of hostile conspiratorial views. Specifically, respondents who reported viewing the grounding of one’s beliefs in evidence and logic as a moral virtue (Ståhl et al., 2016) were more likely to share hostile conspiratorial news with their political opponents on social media than individuals low on this trait. Importantly, the effect generalized to two types of news stories overtly targeting the participant’s outgroup: “fake” stories that were entirely fabricated and stories anchored in real events.

Tuesday, April 12, 2022

The Affective Harm Account (AHA) of Moral Judgment: Reconciling Cognition and Affect, Dyadic Morality and Disgust, Harm and Purity

Kurt Gray, Jennifer K. MacCormack, et al.
In Press (2022)
Journal of Personality and Social Psychology

Abstract

Moral psychology has long debated whether moral judgment is rooted in harm vs. affect. We reconcile this debate with the Affective Harm Account (AHA) of moral judgment. The AHA understands harm as an intuitive perception (i.e., perceived harm), and divides “affect” into two: embodied visceral arousal (i.e., gut feelings) and stimulus-directed affective appraisals (e.g., ratings of disgustingness). The AHA was tested in a randomized, double-blind pharmacological experiment with healthy young adults judging the immorality, harmfulness, and disgustingness of everyday moral scenarios (e.g., lying) and unusual purity scenarios (e.g., sex with a corpse) after receiving either a placebo or the beta-blocker propranolol (a drug that dampens visceral arousal). Results confirmed the three key hypotheses of the AHA. First, perceived harm and affective appraisals are neither competing nor independent but intertwined. Second, although both perceived harm and affective appraisals predict moral judgment, perceived harm is consistently relevant across all scenarios (in line with the Theory of Dyadic Morality), whereas affective appraisals are especially relevant in unusual purity scenarios (in line with affect-as-information theory). Third, the “gut feelings” of visceral arousal are not as important to morality as often believed. Dampening visceral arousal (via propranolol) did not directly impact moral judgment, but instead changed the relative contribution of affective appraisals to moral judgment—and only in unusual purity scenarios. By embracing a constructionist view of the mind that blurs traditional dichotomies, the AHA reconciles historic harm-centric and current affect-centric theories, parsimoniously explaining judgment differences across various moral scenarios without requiring any “moral foundations.”

Discussion

Moral psychology has long debated whether moral judgment is grounded in affect or harm. Seeking to reconcile these apparently competing perspectives, we have proposed an Affective Harm Account (AHA) of moral judgment. This account is conciliatory because it highlights the importance of both perceived harm and affect, not as competing considerations but as joint partners—two different horses yoked together pulling the cart of moral judgment.

The AHA also adds clarity to the previously murky nature of “affect” in moral psychology, differentiating it both in nature and measurement as (at least) two phenomena—embodied, free-floating, visceral arousal (i.e., “gut feelings”) and self-reported, context-bound, affective appraisals (i.e., “this situation is gross”). The importance of affect in moral judgment—especially the “gut feelings” of visceral arousal—was tested via administration of propranolol, which dampens visceral arousal via beta-adrenergic receptor blockade. Importantly, propranolol allows us to manipulate more general visceral arousal (rather than targeting a specific organ, like the gut, or a specific state, like nausea). This increases the potential generalizability of these findings to other moral scenarios (beyond disgust) where visceral arousal might be relevant. We measured the effect of propranolol (vs. placebo) on ratings of moral condemnation, perceived harm, and affective appraisals (i.e., operationalized as ratings of disgust, as in much past work). These ratings were obtained for both everyday moral scenarios (Hofmann et al., 2018)—which are dyadic in structure and thus obviously linked to harm—and for unusual purity scenarios, which are frequently linked to affective appraisals of disgust (Horberg et al., 2009). This study offers support for the three hypotheses of the AHA.

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi A, Nili H, et al.
(2022) PLoS Biol 20(3): e3001565. 
https://doi.org/10.1371/journal.pbio.3001565

Abstract

A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity has proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.

Discussion

A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were only affected by the need to reciprocate influence (i.e., normative) specifically in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social change of mind (we tend to change our mind towards the more accurate participants). 

Sunday, April 10, 2022

The habituation fallacy: Disaster victims who are repeatedly victimized are assumed to suffer less, and they are helped less

Hanna Zagefka
European Journal of Social Psychology
First published: 09 February 2022

Abstract

This paper tests the effects of lay beliefs that disaster victims who have been victimized by other events in the past will cope better with a new adverse event than first-time victims. It is shown that believing that disaster victims can get habituated to suffering reduces helping intentions towards victims of repeated adversity, because repeatedly victimized victims are perceived to be less traumatized by a new adverse event. In other words, those who buy into habituation beliefs will impute less trauma and suffering to repeated victims compared to first-time victims, and they will therefore feel less inclined to help those repeatedly victimized victims. This was demonstrated in a series of six studies, two of which were preregistered (total N = 1,010). Studies 1, 2 and 3 showed that beliefs that disaster victims become habituated to pain do indeed exist among lay people. Such beliefs are factually inaccurate, because repeated exposure to severe adversity makes it harder, not easier, for disaster victims to cope with a new negative event. Therefore, we call this belief the ‘habituation fallacy’. Studies 2, 3 and 4 demonstrated an indirect negative effect of a belief in the ‘habituation fallacy’ on ‘helping intentions’, via lesser ‘trauma’ ascribed to victims who had previously been victimized. Studies 5 and 6 demonstrated that a belief in the ‘habituation fallacy’ causally affects trauma ascribed to, and helping intentions towards, repeatedly victimized victims, but not first-time victims. The habituation fallacy can potentially explain reluctance to donate to humanitarian causes in those geographical areas that frequently fall prey to disasters.

From the General Discussion

Taken together, these studies show a tendency to believe in the habituation fallacy. That is, people tend to believe that victims who have previously suffered are less affected by new adversity than victims who are first-time sufferers. Buy-in to the habituation fallacy means that victims of repeated adversity are assumed to suffer less, and that they are consequently helped less. Consistent evidence for this was found across six studies, two of which were preregistered.

These results are important and add to the extant literature in significant ways.  Many factors have been discussed as driving disaster giving (see e.g., Albayrak, Aydemir, & Gleibs, 2021; Bekkers & Wiepking, 2011; Berman et al., 2018; Bloom, 2017; Cuddy et al., 2007; Dickert et al., 2011; Evangelidis & Van den Bergh, 2013; Hsee et al., 2013; Kogut, 2011; Kogut et al., 2015; van Leeuwen & Täuber, 2012; Zagefka & James, 2015).  Significant perceived suffering caused by an event is clearly a powerful factor that propels donors into action. However, although lay beliefs about disasters have been studied, lay beliefs about suffering by the victims have been neglected so far. Moreover, although clearly some areas of the world are visited more frequently by disasters than others, the potential effects of this on helping decisions have not previously been studied.

The present paper therefore addresses an important gap, by linking lay beliefs about disasters to both perceived previous victimization and perceived suffering of the victims.  Clearly, helping decisions are driven by emotional and often biased factors (Bloom, 2017), and this contribution sheds light on an important mechanism that is both affective and potentially biased in nature, thereby advancing our understanding of donor motivations (Chapman et al., 2020). 

Saturday, April 9, 2022

Deciding to be authentic: Intuition is favored over deliberation when authenticity matters

K. Oktar & T. Lombrozo
Cognition
Volume 223, June 2022, 105021

Abstract

Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people's folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker's commitment and authenticity—with only inferences about the decision-maker's authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

From the Discussion section

Our theory and results are broadly consistent with prior work on cross-domain variation in processing preferences (e.g., Inbar et al., 2010), as well as work showing that people draw social inferences from intuitive decisions (e.g., Tetlock, 2003). However, we bridge and extend these literatures by relating inferences made on the basis of an individual's decision to cross-domain variation in the prescribed roles of intuition and deliberation. Importantly, our work is unique in showing that neither judgments about how decisions ought to be made, nor inferences from decisions, are fully reducible to considerations of differential processing costs or the reliability of a given process for the case at hand. Our stimuli—unlike those used in prior work (e.g., Inbar et al., 2010; Pachur & Spaar, 2015)—involved deliberation costs that had already been incurred at the time of decision, yet participants nevertheless displayed substantial and systematic cross-domain variation in their inferences, processing judgments, and eventual decisions. Most dramatically, our matched-information scenarios in Experiment 3 ensured that effects were driven by decision basis alone. In addition to excluding the computational costs of deliberation and matching the decision to deliberate, these scenarios also matched the evidence available concerning the quality of each choice. Nonetheless, decisions that were based on intuition vs. deliberation were judged differently along a number of dimensions, including their authenticity.

Friday, April 8, 2022

What predicts suicidality among psychologists? An examination of risk and resilience

S. Zuckerman, O. R. Lightsey Jr. & J. White
Death Studies (2022)
DOI: 10.1080/07481187.2022.2042753

Abstract

Psychologists may have a uniquely high risk for suicide. We examined whether, among 172 psychologists, factors predicting suicide risk among the general population (e.g., gender and mental illness), occupational factors (e.g., burnout and secondary traumatic stress), and past trauma predicted suicidality. We also tested whether resilience and meaning in life were negatively related to suicidality and whether resilience buffered relationships between risk factors and suicidality. Family history of mental illness, number of traumas, and lifetime depression/anxiety predicted higher suicidality, whereas resilience predicted lower suicidality. At higher levels of resilience, the relationship between family history of suicide and suicidality was stronger.

From the Discussion section:

Contrary to hypotheses, however, resilience did not consistently buffer the relationship between vulnerability factors and suicidality. Indeed, resilience appeared to strengthen the relationships between having a family history of suicide and suicidality. It is plausible that psychologists may overestimate their resilience or believe that they “should” be resilient given their training or their helping role (paralleling burnout-related themes identified in the culture of medicine, “show no weakness” and “patients come first;” see Williams et al., 2020, p. 820). Similarly, persons who believe that they are generally resilient may be demoralized by their inability to prevent family history of suicide from negatively affecting them, and this demoralization may result in family history of suicide being a particularly strong predictor among these individuals. Alternatively, this result could stem from the BRS, which may not measure components of resilience that protect against suicidality, or it could be an artifact of small sample size and low power for detecting moderation (Frazier et al., 2004). Of course, interaction terms are symmetric, and the resilience x family history of suicide interaction can also be interpreted to mean that family history of suicide strengthens the relationship between resilience and suicidality: When there is a family history of suicide, resilience has a positive relationship with suicidality whereas, when there is no family history of suicide, resilience has a negative relationship with suicidality.
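The symmetry of interaction terms noted above can be illustrated with a toy regression. This is a hypothetical sketch using simulated data (not the study's data or measures), with coefficients chosen so that, as in the reported result, the sign of the resilience slope flips depending on family history:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated (hypothetical) variables: resilience R (continuous),
# family history of suicide F (0/1), suicidality y.
R = rng.normal(size=n)
F = rng.integers(0, 2, size=n).astype(float)
y = 0.5 * F - 0.3 * R + 0.6 * R * F + rng.normal(scale=0.1, size=n)

# Fit y = b0 + b1*R + b2*F + b3*R*F by ordinary least squares.
X = np.column_stack([np.ones(n), R, F, R * F])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# Reading 1: b3 means family history's effect grows with resilience.
# Reading 2: the SAME b3 means resilience's effect depends on family
# history -- negative (protective) without it, positive with it.
slope_R_given_F0 = b1        # slope of resilience when F = 0
slope_R_given_F1 = b1 + b3   # slope of resilience when F = 1
print(slope_R_given_F0, slope_R_given_F1)
```

Both readings are algebraically the same coefficient `b3`; which conditional slope one reports is an interpretive choice, which is exactly why the authors offer both framings.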

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N., et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AIs to decode human feelings and behavior and to evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scalable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.