Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, April 25, 2022

Morality just isn't Republicans' thing anymore

Steve Larkin
The Week
Originally posted 23 APR 22

Here is an excerpt:

There is no understanding the Republican Party without understanding its leader and id, former President Donald Trump. His sins and crimes have been enumerated many times. But for the record, the man is a serial adulterer who brags about committing sexual assault with impunity, responsible for three cameo appearances in Playboy videos, dishonest in his business dealings, and needlessly callous and cruel. And, finally, he claims that he has never asked God for forgiveness for any of this.

Trump's presidency would seem to have vindicated the Southern Baptist Convention's claim that "tolerance of serious wrong by leaders sears the conscience of the culture, spawns unrestrained immorality and lawlessness in the society, and surely results in God's judgment." Of course, that was about former President Bill Clinton and the Monica Lewinsky scandal. Now, tolerating this sort of behavior in a leader is par for the Republican Party course.

And Trump seems to have set a kind of example for other stars of the MAGAverse: Rep. Matt Gaetz is under investigation for paying for sex with an underage girl and sex trafficking; former Missouri Gov. Eric Greitens, who was forced to resign that post after accusations that he tried to use nude photos to blackmail a woman with whom he had an affair, has not let that stop him from running for the Senate; Rep. Madison Cawthorn has been accused of sexual harassment and other misconduct by women who were his classmates in college.

Democrats, of course, have their own fair share of scandals, criminals, and cads, and they see themselves as being on the moral side, too. But they're not running around championing those "traditional values."

Why do Republicans thrill to Trump and tolerate misbehavior which previous generations — maybe even the very same people, a few decades ago — would have viewed as immediately disqualifying? (A long time ago, Ronald Reagan being divorced and remarried was a serious problem for a small but noticeable group of voters.) Maybe it's because, while Trump is an extreme (and rich) example, in many ways he's not so different from his devotees.

Sunday, April 24, 2022

Individual vulnerability to industrial robot adoption increases support for the radical right

Anelli, M., Colantone, I., & Stanig, P. 
(2021). PNAS, 118(47), e2111611118.
https://doi.org/10.1073/pnas.2111611118

Significance

The success of radical-right parties across western Europe has generated much concern. These parties propose making borders less permeable, oppose ethnic diversity, and often express impatience with the institutions of representative democracy. Part of their recent success has been shown to be driven by structural economic changes, such as globalization, which triggers distributional consequences that, in turn, translate into voting behavior. We ask what are the political consequences of a different structural change: robotization of manufacturing. We propose a measure of individual exposure to automation and show that individuals more vulnerable to negative consequences of automation tend to display more support for the radical right. Automation exposure raises support for the radical left too, but to a significantly lower extent.

Abstract

The increasing success of populist and radical-right parties is one of the most remarkable developments in the politics of advanced democracies. We investigate the impact of industrial robot adoption on individual voting behavior in 13 western European countries between 1999 and 2015. We argue for the importance of the distributional consequences triggered by automation, which generates winners and losers also within a given geographic area. Analysis that exploits only cross-regional variation in the incidence of robot adoption might miss important facets of this process. In fact, patterns in individual indicators of economic distress and political dissatisfaction are masked in regional-level analysis, but can be clearly detected by exploiting individual-level variation. We argue that traditional measures of individual exposure to automation based on the current occupation of respondents are potentially contaminated by the consequences of automation itself, due to direct and indirect occupational displacement. We introduce a measure of individual exposure to automation that combines three elements: 1) estimates of occupational probabilities based on employment patterns prevailing in the preautomation historical labor market, 2) occupation-specific automatability scores, and 3) the pace of robot adoption in a given country and year. We find that individuals more exposed to automation tend to display higher support for the radical right. This result is robust to controlling for several other drivers of radical-right support identified by earlier literature: nativism, status threat, cultural traditionalism, and globalization. We also find evidence of significant interplay between automation and these other drivers.

Conclusion

We study the effects of robot adoption on voting behavior in western Europe. We find that higher exposure to automation increases support for radical-right parties. We argue that an individual-level analysis of vulnerability to automation is required, given the prominent role played by the distributional effects of automation unfolding within geographic areas. We also argue that measures of automation exposure based on an individual’s current occupation, as used in previous studies, are potentially problematic, due to direct and indirect displacement induced by automation. We then propose an approach that combines individual observable features with historical labor-market data. Our paper provides further evidence on the material drivers behind the increasing support for the radical right. At the same time, it takes into account the role of cultural factors and shows evidence of their interplay with automation in explaining the political realignment witnessed by advanced Western democracies.

Saturday, April 23, 2022

Historical Fundamentalism? Christian Nationalism and Ignorance About Religion in American Political History

S. L. Perry, R. Braunstein, et al.
Journal for the Scientific Study of Religion 
(2022) 61(1):21–40

Abstract

Religious right leaders often promulgate views of Christianity's historical preeminence, privilege, and persecution in the United States that are factually incorrect, suggesting credulity, ignorance, or perhaps, a form of ideologically motivated ignorance on the part of their audience. This study examines whether Christian nationalism predicts explicit misconceptions regarding religion in American political history and explores theories about the connection. Analyzing nationally representative panel data containing true/false statements about religion's place in America's founding documents, policies, and court decisions, Christian nationalism is the strongest predictor that Americans fail to affirm factually correct answers. This association is stronger among whites compared to black Americans and religiosity actually predicts selecting factually correct answers once we account for Christian nationalism. Analyses of “do not know” response patterns find more confident correct answers from Americans who reject Christian nationalism and more confident incorrect answers from Americans who embrace Christian nationalism. We theorize that, much like conservative Christians have been shown to incorrectly answer science questions that are “religiously contested,” Christian nationalism inclines Americans to affirm factually incorrect views about religion in American political history, likely through their exposure to certain disseminators of such misinformation, but also through their allegiance to a particular political-cultural narrative they wish to privilege.

From the Discussion and Conclusions

Our findings extend our understanding of contemporary culture war conflicts in the United States in several key ways. Our finding that Christian nationalist ideology is associated not only with different political or social values (Hunter 1992; Smith 2000), but also with belief in explicitly wrong historical claims, goes beyond issues of subjective interpretation or mere differences of opinion to underscore the reality that Americans are divided by different information. Large groups of Americans hold incompatible beliefs about issues of fact, with those who ardently reject Christian nationalism more likely to confidently and correctly affirm factual claims and Christian nationalists more likely to confidently and incorrectly affirm misinformation. To be sure, we are unable to disentangle directionality here, which in all likelihood operates both ways. Christian nationalist ideology is made plausible by false or exaggerated claims about the evangelical character of the nation’s founders and founding documents. Yet Christian nationalism, as an ideology, may also foster a form of motivated ignorance or credulity toward a variety of factually incorrect statements, among them being the preeminence and growing persecution of Christianity in the United States.

Closely related to this last point, our findings extend recent research by further underscoring the powerful influence of Christian nationalism as the ideological source of credulity supporting and spreading far-right misinformation.

Friday, April 22, 2022

Generous with individuals and selfish to the masses

Alós-Ferrer, C.; García-Segarra, J.; Ritschel, A.
(2022). Nature Human Behaviour, 6(1):88-96.

Abstract

The seemingly rampant economic selfishness suggested by many recent corporate scandals is at odds with empirical results from behavioural economics, which demonstrate high levels of prosocial behaviour in bilateral interactions and low levels of dishonest behaviour. We design an experimental setting, the ‘Big Robber’ game, where a ‘robber’ can obtain a large personal gain by appropriating the earnings of a large group of ‘victims’. In a large laboratory experiment (N = 640), more than half of all robbers took as much as possible and almost nobody declined to rob. However, the same participants simultaneously displayed standard, predominantly prosocial behaviour in Dictator, Ultimatum and Trust games. Thus, we provide direct empirical evidence showing that individual selfishness in high-impact decisions affecting a large group is compatible with prosociality in bilateral low-stakes interactions. That is, human beings can simultaneously be generous with others and selfish with large groups.

From the Discussion

Our results demonstrate that socially-relevant selfishness in the large is fully compatible with evidence from experimental economics on bilateral, low-stake games at the individual level, without requiring arguments relying on population differences (in fact, we found no statistically significant differences in the behavior of participants with or without an economics background). The same individuals can behave selfishly when interacting with a large group of other people while, at the same time, displaying standard levels of prosocial behavior in commonly-used laboratory tasks where only one other individual is involved. Additionally, however, individual differences in behavior in the Big Robber Game correlate with individual selfishness in the DG/UG/TG, i.e., Extreme Robbers gave less in the DG, offered less in the UG, and transferred less in the TG than Moderate Robbers.

The finding that people behave selfishly toward a large group while being generous toward individuals suggests that harming many individuals might be easier than harming just one, in line with received evidence that people are more willing to help one individual than many. It also reflects the tradeoff between personal gain and other-regarding concerns encompassed in standard models of social preferences, although this particular implication had not been demonstrated so far. When facing a single opponent in a bilateral game, appropriating a given monetary amount can result in a large interpersonal difference. When appropriating income from a large group of people, the same personal gain involves a smaller percentual difference. Correspondingly, creating a given level of inequality with respect to others results in a much larger personal gain when income is taken from a group than when it is taken from just another person, and hence it is much more likely to offset the disutility from inequality aversion in the former case.
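The asymmetry described above can be made concrete with a small worked example in the style of standard inequality-aversion models (e.g., Fehr–Schmidt). This is an illustrative sketch, not the authors' model; the utility function, parameter value, and payoffs below are hypothetical.

```python
# Illustrative sketch (hypothetical parameters, not the authors' model):
# utility = own payoff minus a penalty proportional to the average
# advantageous inequality over all other players.

def utility(own, others, beta=0.6):
    """Own payoff minus disutility from advantageous inequality, averaged over others."""
    advantage = sum(max(own - o, 0) for o in others) / len(others)
    return own - beta * advantage

# Everyone starts with 10. The robber ends with 20 in both scenarios.
# Bilateral case: take 10 from a single victim (victim left with 0).
bilateral = utility(own=20, others=[0])          # 20 - 0.6 * 20 = 8.0

# Group case: take 1 from each of 10 victims (each left with 9).
group = utility(own=20, others=[9] * 10)         # 20 - 0.6 * 11 = 13.4

print(bilateral, group)
```

With these hypothetical numbers, the same personal gain of 10 leaves the bilateral robber worse off than not robbing at all (8.0 < 10), while robbing the group remains attractive (13.4 > 10), because the per-victim inequality created is much smaller.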

Thursday, April 21, 2022

Social identity switching: How effective is it?

A. K. Zinn, A. Lavric, M. Levine, & M. Koschate
Journal of Experimental Social Psychology
Volume 101, July 2022, 104309

Abstract

Psychological theories posit that we frequently switch social identities, yet little is known about the effectiveness of such switches. Our research aims to address this gap in knowledge by determining whether – and at what level of integration into the self-concept – a social identity switch impairs the activation of the currently active identity (“identity activation cost”). Based on the task-switching paradigm used to investigate task-set control, we prompted social identity switches and measured identity salience in a laboratory study using sequences of identity-related Implicit Association Tests (IATs). Pilot 1 (N = 24) and Study 1 (N = 64) used within-subjects designs with participants completing several social identity switches. The IAT congruency effect was no less robust after identity switches compared to identity repetitions, suggesting that social identity switches were highly effective. Study 2 (N = 48) addressed potential differences for switches between identities at different levels of integration into the self. We investigated whether switches between established identities are more effective than switches from a novel to an established identity. While response times showed the predicted trend towards a smaller IAT congruency effect after switching from a novel identity, we found a trend towards the opposite pattern for error rates. The registered study (N = 144) assessed these conflicting results with sufficient power and found no significant difference in the effectiveness of switching from novel as compared to established identities. An effect of cross-categorisation in the registered study was likely due to the requirement to learn individual stimuli.

General discussion

The main aim of the current investigation was to determine the effectiveness of social identity switching. We assessed whether social identity switches lead to identity activation costs (impaired activation of the next identity) and whether social identity switches are less effective for novel than for well-established identities. The absence of identity activation costs in our results indicates that identity switching is effective. This has important theoretical implications by lending empirical support to self-categorisation theory, which states that social identity switches are “inherently variable, fluid, and context dependent” (Turner et al., 1994, p. 454).

To our knowledge, our investigation is the first approach that has employed key aspects of the task switching paradigm to learn about the process of social identity switching. The potential cost of an identity switch also has important practical implications. Like task switches, social identity switches are ubiquitous. Technological developments over the last decades have resulted in different social identities being only “a click away” from becoming salient. We can interact with (and receive) information about different social identities on a permanent basis, wherever we are - by scrolling through social media, reading news on our smartphone, receiving emails and instant messages, often in rapid succession. The literature on task switch costs has changed the way we view “multi-tasking” by providing a better understanding of its impact on task performance and task selection. Similarly, our research has important practical implications for how well people can deal with frequent and rapid social identity switches.

Wednesday, April 20, 2022

The human black-box: The illusion of understanding human better than algorithmic decision-making

Bonezzi, A., Ostinelli, M., & Melzner, J. (2022). 
Journal of Experimental Psychology: General.

Abstract

As algorithms increasingly replace human decision-makers, concerns have been voiced about the black-box nature of algorithmic decision-making. These concerns raise an apparent paradox. In many cases, human decision-makers are just as much of a black-box as the algorithms that are meant to replace them. Yet, the inscrutability of human decision-making seems to raise fewer concerns. We suggest that one of the reasons for this paradox is that people foster an illusion of understanding human better than algorithmic decision-making, when in fact, both are black-boxes. We further propose that this occurs, at least in part, because people project their own intuitive understanding of a decision-making process more onto other humans than onto algorithms, and as a result, believe that they understand human better than algorithmic decision-making, when in fact, this is merely an illusion.

General Discussion

Our work contributes to prior literature in two ways. First, it bridges two streams of research that have thus far been considered in isolation: IOED (Illusion of Explanatory Depth) (Rozenblit & Keil, 2002) and projection (Krueger, 1998). IOED has mostly been documented for mechanical devices and natural phenomena and has been attributed to people confusing a superficial understanding of what something does for how it does it (Keil, 2003). Our research unveils a previously unexplored driver of IOED, namely, the tendency to project one’s own cognitions onto others, and in so doing extends the scope of IOED to human decision-making. Second, our work contributes to the literature on clinical versus statistical judgments (Meehl, 1954). Previous research shows that people tend to trust humans more than algorithms (Dietvorst et al., 2015). Among the many reasons for this phenomenon (see Grove & Meehl, 1996), one is that people do not understand how algorithms work (Yeomans et al., 2019). Our research suggests that people’s distrust toward algorithms may stem not only from a lack of understanding how algorithms work but also from an illusion of understanding how their human counterparts operate.

Our work can be extended by exploring other consequences and psychological processes associated with the illusion of understanding humans better than algorithms. As for consequences, more research is needed to explore how illusory understanding affects trust in humans versus algorithms. Our work suggests that the illusion of understanding humans more than algorithms can yield greater trust in decisions made by humans. Yet, to the extent that such an illusion stems from a projection mechanism, it might also lead to favoring algorithms over humans, depending on the underlying introspections. Because people’s introspections can be fraught with biases and idiosyncrasies they might not even be aware of (Nisbett & Wilson, 1977; Wilson, 2004), people might erroneously project these same biases and idiosyncrasies more onto other humans than onto algorithms and consequently trust those humans less than algorithms. To illustrate, one might expect a recruiter to favor people of the same gender or ethnic background just because one may be inclined to do so. In these circumstances, the illusion of understanding humans better than algorithms might yield greater trust in algorithmic than human decisions (Bonezzi & Ostinelli, 2021).

Tuesday, April 19, 2022

Diffusion of Punishment in Collective Norm Violations

Keshmirian, A., Hemmatian, B., et al. 
(2022, March 7). PsyArXiv
https://doi.org/10.31234/osf.io/sjz7r

Abstract

People assign less punishment to individuals who inflict harm collectively, compared to those who do so alone. We show that this arises from judgments of diminished individual causal responsibility in the collective cases. In Experiment 1, participants (N=1002) assigned less punishment to individuals involved in collective actions leading to intentional and accidental deaths, but not failed attempts, emphasizing that harmful outcomes, but not malicious intentions, were necessary and sufficient for the diffusion of punishment. Experiment 2.a compared the diffusion of punishment for harmful actions with ‘victimless’ purity violations (e.g., eating human flesh in groups; N=752). In victimless cases, where the question of causal responsibility for harm does not arise, diffusion of collective responsibility was greatly reduced—an effect replicated in Experiment 2.b (N=500). We propose discounting in causal attribution as the underlying cognitive mechanism for reduction in proposed punishment for collective harmful actions.

From the Discussion

Our findings also bear on theories of moral judgment. First, they support the dissociation of causal and mental-state processes in moral judgment (Cushman, 2008; Rottman & Young, 2019; Young et al., 2007, 2010). Second, they support disparate judgment processes for harmful versus "victimless" moral violations (Chakroff et al., 2013, 2017; Dungan et al., 2017; Giner-Sorolla & Chapman, 2017; Rottman & Young, 2019). Third, they reinforce the idea that punishment often involves a "backward-looking" retributive focus on responsibility, rather than a "forward-looking" focus on rehabilitation, incapacitation, or deterrence (which, we presume, would generally favor treating solo and group actors equivalently). Punishers' future-oriented self-serving motives and their evolutionary roots need further investigation as alternative sources for punishment diffusion. For instance, punishing joint violators may produce more enemies for the punisher, reducing the motivation for a severe response.

Whether the diffusion of punishment and our causal explanation for it extends to other moral domains (e.g., fairness; Graham et al., 2011) is a topic for future research. Another interesting extension is whether different causal structures produce different effects on judgments. Our vignettes were intentionally ambiguous about causal chains and whether multiple agents overdetermined the harmful outcome. Contrasting diffusion in conjunctive moral norm violations (when collaboration is necessary for violation) with disjunctive ones (when one individual would suffice) is informative, since attributions of responsibility are generally higher in the former class (Gerstenberg & Lagnado, 2010; Kelley, 1973; Lagnado et al., 2013; Morris & Larrick, 1995; Shaver, 1985; Zultan et al., 2012).

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al.
PNAS, 2022, 119 (12) e2117432119 

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
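The "model the decision problem, then derive an optimal strategy" step can be illustrated with a toy example. The code below runs value iteration on a hypothetical two-choice environment where, as in the paper's training environments, immediate rewards do not predict long-term outcomes; it is a minimal sketch, not the authors' implementation, and all states, rewards, and parameters are invented for illustration.

```python
# Toy illustration of deriving an optimal decision strategy via value
# iteration on a small, deterministic Markov decision process.
# The environment is hypothetical and far simpler than the planning
# tasks used in the paper.

def value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-6):
    """Return the optimal value function and greedy policy for a finite MDP."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * V[transition(s, a)] for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {
        s: max(actions, key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
        for s in states
    }
    return V, policy

# Hypothetical environment: "grab" pays 1 immediately but leads to a
# dead-end state; "wait" pays nothing now but reaches a state where
# grabbing pays 5 forever. Myopic choice and optimal choice diverge.
states = ["start", "good", "bad"]
actions = ["grab", "wait"]

def transition(s, a):
    if s == "start":
        return "bad" if a == "grab" else "good"
    return s  # "good" and "bad" are absorbing

def reward(s, a):
    if s == "start":
        return 1.0 if a == "grab" else 0.0
    if s == "good":
        return 5.0 if a == "grab" else 0.0
    return 0.0

V, policy = value_iteration(states, actions, transition, reward)
print(policy["start"])  # the optimal strategy forgoes the immediate reward
```

In an intelligent-tutor setting along the lines described above, the derived policy would serve as the reference against which a learner's choices are scored, so that feedback rewards the far-sighted "wait" rather than the myopically tempting "grab".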