Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, April 21, 2022

Social identity switching: How effective is it?

A. K. Zinn, A. Lavric, M. Levine, & M. Koschate
Journal of Experimental Social Psychology
Volume 101, July 2022, 104309

Abstract

Psychological theories posit that we frequently switch social identities, yet little is known about the effectiveness of such switches. Our research aims to address this gap in knowledge by determining whether – and at what level of integration into the self-concept – a social identity switch impairs the activation of the currently active identity (“identity activation cost”). Based on the task-switching paradigm used to investigate task-set control, we prompted social identity switches and measured identity salience in a laboratory study using sequences of identity-related Implicit Association Tests (IATs). Pilot 1 (N = 24) and Study 1 (N = 64) used within-subjects designs with participants completing several social identity switches. The IAT congruency effect was no less robust after identity switches compared to identity repetitions, suggesting that social identity switches were highly effective. Study 2 (N = 48) addressed potential differences for switches between identities at different levels of integration into the self. We investigated whether switches between established identities are more effective than switches from a novel to an established identity. While response times showed the predicted trend towards a smaller IAT congruency effect after switching from a novel identity, we found a trend towards the opposite pattern for error rates. The registered study (N = 144) assessed these conflicting results with sufficient power and found no significant difference in the effectiveness of switching from novel as compared to established identities. An effect of cross-categorisation in the registered study was likely due to the requirement to learn individual stimuli.
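Editor's Note: the key dependent measure here is the IAT congruency effect (slower responses on incongruent than on congruent blocks), and the question is whether that effect shrinks on trials that follow an identity switch. As a rough sketch of that logic only, the snippet below computes the contrast from hypothetical trial-level data; the column names, values, and analysis choices are illustrative assumptions, not the authors' materials or pipeline.

```python
import pandas as pd

# Hypothetical trial-level IAT data (illustrative values only).
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "transition":  ["repeat", "repeat", "switch", "switch"] * 2,  # identity repetition vs. switch
    "congruency":  ["congruent", "incongruent"] * 4,              # IAT block type
    "rt_ms":       [640, 780, 655, 790, 700, 860, 690, 845],      # response times in milliseconds
})

# IAT congruency effect = mean RT(incongruent) - mean RT(congruent),
# computed separately for trials following an identity repetition vs. an identity switch.
means = (trials
         .groupby(["transition", "congruency"])["rt_ms"]
         .mean()
         .unstack("congruency"))
congruency_effect = means["incongruent"] - means["congruent"]

# If a switch impaired activation of the new identity, the congruency effect should be
# smaller after a switch than after a repetition ("identity activation cost").
activation_cost = congruency_effect["repeat"] - congruency_effect["switch"]
print(congruency_effect)
print("Estimated identity activation cost (ms):", activation_cost)
```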

General discussion

The main aim of the current investigation was to determine the effectiveness of social identity switching. We assessed whether social identity switches lead to identity activation costs (impaired activation of the next identity) and whether social identity switches are less effective for novel than for well-established identities. The absence of identity activation costs in our results indicates that identity switching is effective. This has important theoretical implications, lending empirical support to self-categorisation theory, which holds that social identity switches are “inherently variable, fluid, and context dependent” (Turner et al., 1994, p. 454).

To our knowledge, our investigation is the first to employ key aspects of the task-switching paradigm to learn about the process of social identity switching. The potential cost of an identity switch also has important practical implications. Like task switches, social identity switches are ubiquitous. Technological developments over the last decades have left different social identities only “a click away” from becoming salient. We can interact with, and receive information about, different social identities on a permanent basis, wherever we are: by scrolling through social media, reading news on our smartphones, and receiving emails and instant messages, often in rapid succession. The literature on task-switch costs has changed the way we view “multi-tasking” by providing a better understanding of its impact on task performance and task selection. Similarly, our research has important practical implications for how well people can deal with frequent and rapid social identity switches.

Wednesday, April 20, 2022

The human black-box: The illusion of understanding human better than algorithmic decision-making

Bonezzi, A., Ostinelli, M., & Melzner, J. (2022). 
Journal of Experimental Psychology: General.

Abstract

As algorithms increasingly replace human decision-makers, concerns have been voiced about the black-box nature of algorithmic decision-making. These concerns raise an apparent paradox. In many cases, human decision-makers are just as much of a black-box as the algorithms that are meant to replace them. Yet, the inscrutability of human decision-making seems to raise fewer concerns. We suggest that one of the reasons for this paradox is that people foster an illusion of understanding human better than algorithmic decision-making, when in fact, both are black-boxes. We further propose that this occurs, at least in part, because people project their own intuitive understanding of a decision-making process more onto other humans than onto algorithms, and as a result, believe that they understand human better than algorithmic decision-making, when in fact, this is merely an illusion.

General Discussion

Our work contributes to prior literature in two ways. First, it bridges two streams of research that have thus far been considered in isolation: the IOED (Illusion of Explanatory Depth; Rozenblit & Keil, 2002) and projection (Krueger, 1998). The IOED has mostly been documented for mechanical devices and natural phenomena and has been attributed to people confusing a superficial understanding of what something does for how it does it (Keil, 2003). Our research unveils a previously unexplored driver of the IOED, namely the tendency to project one’s own cognitions onto others, and in so doing extends the scope of the IOED to human decision-making. Second, our work contributes to the literature on clinical versus statistical judgments (Meehl, 1954). Previous research shows that people tend to trust humans more than algorithms (Dietvorst et al., 2015). Among the many reasons for this phenomenon (see Grove & Meehl, 1996), one is that people do not understand how algorithms work (Yeomans et al., 2019). Our research suggests that people’s distrust toward algorithms may stem not only from a lack of understanding of how algorithms work but also from an illusion of understanding how their human counterparts operate.

Our work can be extended by exploring other consequences and psychological processes associated with the illusion of understanding humans better than algorithms. As for consequences, more research is needed to explore how illusory understanding affects trust in humans versus algorithms. Our work suggests that the illusion of understanding humans more than algorithms can yield greater trust in decisions made by humans. Yet, to the extent that such an illusion stems from a projection mechanism, it might also lead to favoring algorithms over humans, depending on the underlying introspections. Because people’s introspections can be fraught with biases and idiosyncrasies they might not even be aware of (Nisbett & Wilson, 1977; Wilson, 2004), people might erroneously project these same biases and idiosyncrasies more onto other humans than onto algorithms and consequently trust those humans less than algorithms. To illustrate, one might expect a recruiter to favor people of the same gender or ethnic background just because one may be inclined to do so oneself. In these circumstances, the illusion of understanding humans better than algorithms might yield greater trust in algorithmic than in human decisions (Bonezzi & Ostinelli, 2021).

Tuesday, April 19, 2022

Diffusion of Punishment in Collective Norm Violations

Keshmirian, A., Hemmatian, B., et al. 
(2022, March 7). PsyArXiv
 https://doi.org/10.31234/osf.io/sjz7r

Abstract

People assign less punishment to individuals who inflict harm collectively, compared to those who do so alone. We show that this arises from judgments of diminished individual causal responsibility in the collective cases. In Experiment 1, participants (N=1002) assigned less punishment to individuals involved in collective actions leading to intentional and accidental deaths, but not failed attempts, emphasizing that harmful outcomes, but not malicious intentions, were necessary and sufficient for the diffusion of punishment. Experiment 2.a compared the diffusion of punishment for harmful actions with ‘victimless’ purity violations (e.g., eating human flesh in groups; N=752). In victimless cases, where the question of causal responsibility for harm does not arise, diffusion of collective responsibility was greatly reduced, an effect replicated in Experiment 2.b (N=500). We propose discounting in causal attribution as the underlying cognitive mechanism for the reduction in proposed punishment for collective harmful actions.

From the Discussion

Our findings also bear on theories of moral judgment. First, they support the dissociation of causal and mental-state processes in moral judgment (Cushman, 2008; Rottman & Young, 2019; Young et al., 2007, 2010). Second, they support disparate judgment processes for harmful versus "victimless" moral violations (Chakroff et al., 2013, 2017; Dungan et al., 2017; Giner-Sorolla & Chapman, 2017; Rottman & Young, 2019). Third, they reinforce the idea that punishment often involves a "backward-looking" retributive focus on responsibility, rather than a "forwards-looking" focus on rehabilitation, incapacitation, or deterrence (which, we presume, would generally favor treating solo and group actors equivalently). Punishers' future-oriented self-serving motives and their evolutionary roots need further investigation as alternative sources for punishment diffusion. For instance, punishing joint violators may produce more enemies for the punisher, reducing the motivation for a severe response.

Whether the diffusion of punishment and our causal explanation for it extend to other moral domains (e.g., fairness; Graham et al., 2011) is a topic for future research. Another interesting extension is whether different causal structures produce different effects on judgments. Our vignettes were intentionally ambiguous about causal chains and whether multiple agents overdetermined the harmful outcome. Contrasting diffusion in conjunctive moral norm violations (when collaboration is necessary for the violation) with disjunctive ones (when one individual would suffice) is informative, since attributions of responsibility are generally higher in the former class (Gerstenberg & Lagnado, 2010; Kelley, 1973; Lagnado et al., 2013; Morris & Larrick, 1995; Shaver, 1985; Zultan et al., 2012).

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al.
PNAS, 2022, 119 (12) e2117432119 

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
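Editor's Note: the paragraph above describes a loop of deriving an optimal strategy and then giving learners immediate feedback on how they decide. The sketch below illustrates only the feedback step, under the assumption that the optimal strategy is already known and reduced to a toy rule ("inspect long-term outcomes before acting"); the environment, rule, and messages are invented for illustration and are not the authors' tutor.

```python
from dataclasses import dataclass

@dataclass
class PlanningState:
    inspected: set    # nodes whose reward the learner has already looked up
    leaf_nodes: set   # nodes holding the long-term outcomes

def optimal_action(state: PlanningState) -> str:
    """Toy optimal heuristic for environments where immediate rewards are uninformative:
    inspect an uninspected long-term outcome if any remain, otherwise stop and act."""
    remaining = state.leaf_nodes - state.inspected
    return f"inspect:{min(remaining)}" if remaining else "act"

def metacognitive_feedback(state: PlanningState, learner_action: str) -> str:
    """Immediate feedback on *how* the learner is deciding, not on the final outcome."""
    best = optimal_action(state)
    if learner_action == best:
        return "Good: that is what the optimal strategy would do here."
    return (f"Suboptimal: you chose '{learner_action}', but the optimal strategy "
            f"would choose '{best}'. Check long-term outcomes before committing.")

# Example: the learner acts before checking any long-term outcome.
state = PlanningState(inspected=set(), leaf_nodes={"leaf_A", "leaf_B"})
print(metacognitive_feedback(state, "act"))
```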

Saturday, April 16, 2022

Morality, punishment, and revealing other people’s secrets.

Salerno, J. M., & Slepian, M. L. (2022).
Journal of Personality & Social Psychology, 
122(4), 606–633. 
https://doi.org/10.1037/pspa0000284

Abstract

Nine studies represent the first investigation into when and why people reveal other people’s secrets. Although people keep their own immoral secrets to avoid being punished, we propose that people will be motivated to reveal others’ secrets to punish them for immoral acts. Experimental and correlational methods converge on the finding that people are more likely to reveal secrets that violate their own moral values. Participants were more willing to reveal immoral secrets as a form of punishment, and this was explained by feelings of moral outrage. Using hypothetical scenarios (Studies 1, 3–6), two controversial events in the news (hackers leaking citizens’ private information; Study 2a–2b), and participants’ behavioral choices to keep or reveal thousands of diverse secrets that they learned in their everyday lives (Studies 7–8), we present the first glimpse into when, how often, and one explanation for why people reveal others’ secrets. We found that theories of self-disclosure do not generalize to others’ secrets: Across diverse methodologies, including real decisions to reveal others’ secrets in everyday life, people reveal others’ secrets as punishment in response to moral outrage elicited from others’ secrets.

From the Discussion

Our data serve as a warning flag: one should be aware of a potential confidant’s views with regard to the morality of the behavior. Across 14 studies (Studies 1–8; Supplemental Studies S1–S5), we found that people are more likely to reveal other people’s secrets to the degree that they, personally, view the secret act as immoral. Emotional reactions to the immoral secrets explained this effect, such as moral outrage as well as anger and disgust, which were associated correlationally and experimentally with revealing the secret as a form of punishment. People were significantly more likely to reveal the same secret if the behavior was done intentionally (vs. unintentionally), if it had gone unpunished (vs. already punished by someone else), and in the context of a moral framing (vs. no moral framing). These experiments suggest a causal role for both the degree to which the secret behavior is immoral and the participants’ desire to see the behavior punished.  Additionally, we found that this psychological process did not generalize to non-secret information. Although people were more likely to reveal both secret and non-secret information when they perceived it to be more immoral, they did so for different reasons: as an appropriate punishment for the immoral secrets, and as interesting fodder for gossip for the immoral non-secrets.

Friday, April 15, 2022

Strategic identity signaling in heterogeneous networks

T. van der Does, M. Galesic, et al.
PNAS, 2022.
119 (10) e2117898119

Abstract

Individuals often signal identity information to facilitate assortment with partners who are likely to share norms, values, and goals. However, individuals may also be incentivized to encrypt their identity signals to avoid detection by dissimilar receivers, particularly when such detection is costly. Using mathematical modeling, this idea has previously been formalized into a theory of covert signaling. In this paper, we provide an empirical test of the theory of covert signaling in the context of political identity signaling surrounding the 2020 US presidential elections. To identify likely covert and overt signals on Twitter, we use methods relying on differences in detection between ingroup and outgroup receivers. We strengthen our experimental predictions with additional mathematical modeling and examine the usage of selected covert and overt tweets in a behavioral experiment. We find that participants strategically adjust their signaling behavior in response to the political constitution of their audiences. These results support our predictions and point to opportunities for further theoretical development. Our findings have implications for our understanding of political communication, social identity, pragmatics, hate speech, and the maintenance of cooperation in diverse populations.
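Editor's Note: the abstract mentions identifying covert versus overt signals by how differently ingroup and outgroup receivers detect them. The snippet below is a minimal sketch of that scoring idea, assuming per-tweet detection rates from each audience are available; the thresholds and labels are illustrative assumptions, not the authors' actual classification procedure.

```python
def classify_signal(ingroup_detection: float, outgroup_detection: float,
                    overt_threshold: float = 0.8, gap_threshold: float = 0.3) -> str:
    """A covert signal is recognizable to the ingroup but obscured from the outgroup;
    an overt signal is readily recognized by both audiences."""
    if ingroup_detection >= overt_threshold and outgroup_detection >= overt_threshold:
        return "overt"
    if ingroup_detection - outgroup_detection >= gap_threshold:
        return "covert"
    return "ambiguous"

print(classify_signal(0.90, 0.85))  # overt: both sides read the signal
print(classify_signal(0.75, 0.30))  # covert: ingroup reads it, outgroup mostly misses it
print(classify_signal(0.40, 0.35))  # ambiguous: weak signal overall
```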

Significance

Much of online conversation today consists of signaling one’s political identity. Although many signals are obvious to everyone, others are covert, recognizable to one’s ingroup while obscured from the outgroup. This type of covert identity signaling is critical for collaborations in a diverse society, but measuring covert signals has been difficult, slowing down theoretical development. We develop a method to detect covert and overt signals in tweets posted before the 2020 US presidential election and use a behavioral experiment to test predictions of a mathematical theory of covert signaling. Our results show that covert political signaling is more common when the perceived audience is politically diverse and open doors to a better understanding of communication in politically polarized societies.

From the Discussion

The theory predicts that individuals should use more covert signaling in more heterogeneous groups or when they are in the minority. We found support for this prediction in the ways people shared political speech in a behavioral experiment. We observed the highest levels of covert signaling when audiences consisted almost entirely of cross-partisans, supporting the notion that covert signaling is a strategy for avoiding detection by hostile outgroup members. Of note, we selected tweets for our study at a time of heightened partisan divisions: the four weeks preceding the 2020 US presidential election. Consequently, these tweets mostly discussed the opposing political party. This focus was reflected in our behavioral experiment, in which we did not observe an effect of audience composition when all members were (more or less extreme) copartisans. In that societal context, participants might have perceived the cost of dislikes to be minimal and have likely focused on partisan disputes in their real-life conversations happening around that time. Future work testing the theory of covert signaling should also examine signaling strategies in copartisan conversations during times of salient intragroup political divisions.


Editor's Note: Wondering if this research generalizes to other covert forms of communication during psychotherapy.

Thursday, April 14, 2022

AI won’t steal your job, just make it meaningless

John Danaher
iainews.com
Originally published 18 MAR 22

New technologies are often said to be in danger of making humans redundant, replacing them with robots and AI, and making work disappear altogether. A crisis of identity and purpose might result from that, but Silicon Valley tycoons assure us that a universal basic income could at least take care of people’s material needs, leaving them with plenty of leisure time in which to forge new identities and find new sources of purpose.

This, however, paints an overly simplistic picture. What seems more likely to happen is that new technologies will not make humans redundant at a mass scale, but will change the nature of work, making it worse for many, and sapping the elements that give it some meaning and purpose. It’s the worst of both worlds: we’ll continue working, but our jobs will become increasingly meaningless. 

History has some lessons to teach us here. Technology has had a profound effect on work in the past, not just on the ways in which we carry out our day-to-day labour, but also on how we understand its value. Consider the humble plough. In its most basic form, it is a hand-operated tool, consisting of little more than a pointed stick that scratches a furrow through the soil. This helps a farmer to sow seeds but does little else. Starting in the Middle Ages, however, more complex, ‘heavy’ ploughs began to be used by farmers in Northern Europe. These heavy ploughs rotated and turned the earth, bringing nutrient-rich soils to the surface, and radically altering the productivity of farming. Farming ceased being solely about subsistence. It started to be about generating wealth.

The argument about how the heavy plough transformed the nature of work was advanced by historian Lynn White Jr in his classic study Medieval Technology and Social Change. Writing in the idiom of the early 1960s, he argued that “No more fundamental change in the idea of man’s relation to the soil can be imagined: once man had been part of nature; now he became her exploiter.”

It is easy to trace a line – albeit one that takes a detour through Renaissance mercantilism and the Industrial Revolution – from the development of the heavy plough to our modern conception of work. Although work is still an economic necessity for many people, it is not just that. It is something more. We don’t just do it to survive; we do it to thrive. Through our work we can buy into a certain lifestyle and affirm a certain identity. We can develop mastery and cultivate self-esteem; we make a contribution to our societies and a name for ourselves.

Wednesday, April 13, 2022

Moralization of rationality can stimulate, but intellectual humility inhibits, sharing of hostile conspiratorial rumors.

Marie, A., & Petersen, M. (2022, March 4). 
https://doi.org/10.31219/osf.io/k7u68

Abstract

Many assume that if citizens become more inclined to moralize the values of evidence-based and logical thinking, political hostility and conspiracy theories would be less widespread. Across two large surveys (N = 3675) run in the U.S.A. in 2021 (one exploratory and one preregistered), we provide the first demonstration that moralization of rationality can actually stimulate the spread of conspiratorial and hostile news. This reflects the fact that the moralization of rationality can be highly interrelated with status seeking, corroborating arguments that self-enhancing strategies often advance hidden behind claims to objectivity and morality. In contrast to moral grandstanding on the issue of rationality, our studies find robust evidence that intellectual humility (i.e., the awareness that intuitions are fallible and that suspending critique is often desirable) may immunize people from sharing and believing hostile conspiratorial news. All associations generalized to hostile conspiratorial news both “fake” and anchored in real events.

General Discussion

Many observers assume that citizens more morally sensitized to the values of evidence-based and methodic thinking would be better protected from the perils of political polarization, conspiracy theories, and “fake news.” Yet attention to the discourse of individuals who pass along politically hostile and conspiratorial claims suggests that they often sincerely believe themselves to be free and independent “critical thinkers”, and to care more about “facts” than the “unthinking sheep” to which they assimilate most of the population (Harambam & Aupers, 2017).

Across two large online surveys (N = 3675) conducted in the context of the highly polarized U.S.A. of 2021, we provide the first piece of evidence that moralizing epistemic rationality (a motivation for rationality defined in the abstract) may stimulate the dissemination of hostile conspiratorial views. Specifically, respondents who reported viewing the grounding of one’s beliefs in evidence and logic as a moral virtue (Ståhl et al., 2016) were more likely to share hostile conspiratorial news about their political opponents on social media than individuals low on this trait. Importantly, the effect generalized to two types of news stories overtly targeting the participant’s outgroup: (false) news making entirely fabricated claims, and news anchored in real events.