Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, April 30, 2022

Which moral exemplars inspire prosociality?

Han, H., Workman, C. I., May, J., et al.
(2022). Philosophical Psychology.
https://doi.org/10.1080/09515089.2022.2035343

Abstract

Some stories of moral exemplars motivate us to emulate their admirable attitudes and behaviors, but why do some exemplars motivate us more than others? We systematically studied how motivation to emulate is influenced by the similarity between a reader and an exemplar in social or cultural background (Relatability) and how personally costly or demanding the exemplar’s actions are (Attainability). Study 1 found that university students reported more inspiration and related feelings after reading true stories about the good deeds of a recent fellow alum, compared to a famous moral exemplar from decades past. Study 2A developed a battery of short moral exemplar stories that more systematically varied Relatability and Attainability, along with a set of non-moral exemplar stories for comparison. Studies 2B and 2C examined the path from the story type to relatively low stakes altruism (donating to charity and intentions to volunteer) through perceived attainability and relatability, as well as elevation and pleasantness. Together, our studies suggest that it is primarily the relatability of the moral exemplars, not the attainability of their actions, that inspires more prosocial motivation, at least regarding acts that help others at a relatively low cost to oneself.

General Discussion

Stories can describe moral exemplars who are more or less similar to the reader (relatability) and who engage in acts that are more or less difficult to emulate (attainability). The overarching aim of this research was to address whether prosocial motivation is increased by greater attainability, relatability, or both. Overall, as predicted, more relatable and attainable exemplar stories generate greater inspiration (Study 1) and emulation of prosociality on some measures (Study 2), with perceived relatability being most influential. We developed a battery of ecologically valid exemplar stories that systematically varied attainability and relatability. Although differences in our story types did not produce detectable changes in prosocial behavior, perceived attainability and relatability are highly relative to the individual and thus difficult to systematically manipulate for all or even most participants. For instance, the average American might relate little to a Russian retiree, while others in our studies might do so easily (e.g., if their parents grew up in the Soviet Union). Similarly, donating $50 USD to charity is a major sacrifice for some Americans but not others. So, it was important for us to directly examine the effects of perceived attainability and relatability on prosociality.

The path analyses conducted in Studies 2B and 2C suggest in particular that the perceived relatability—not attainability—of a moral exemplar tends to increase emulation among readers.  The more attainable stories and perceived attainability did not positively predict emotional and behavioral outcomes, but the more relatable stories and perceived relatability did. This suggests that the relatability of exemplars is more fundamental in motivating people compared with the attainability of their acts. Another possibility is that highly attainable moral actions require little personal sacrifice, such as donating $1 to a charity, which is not particularly inspiring and in some cases is perhaps even seen as insulting (compare Thomson and Siegel 2013). Further research could explore these possibilities.
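
To make the analytic idea concrete, here is a minimal simulated sketch of a single-mediator path model of the kind reported. The variable names, effect sizes, and data below are invented for illustration; this is not the authors' code or data.

```python
# Toy mediation/path analysis: story type -> perceived relatability -> donation.
# All numbers below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
story = rng.integers(0, 2, n).astype(float)   # 0 = less, 1 = more relatable story
relat = 0.5 * story + rng.normal(size=n)      # perceived relatability (mediator)
donate = 0.4 * relat + rng.normal(size=n)     # donation amount (outcome)

def ols(predictors, y):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([story], relat)[1]                # path a: story type -> mediator
b = ols([story, relat], donate)[2]        # path b: mediator -> outcome, story held fixed
c_prime = ols([story, relat], donate)[1]  # direct effect of story type
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```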

Friday, April 29, 2022

Navy Deputizes Psychologists to Enforce Drug Rules Even for Those Seeking Mental Health Help

Konstantin Toropin
Military.com
Originally posted 18 APR 22

In the wake of reports that a Navy psychologist played an active role in the drug-use conviction of a sailor who had reached out for mental health assistance, the service is standing by its policy, which does not provide patients with confidentiality and could mean that seeking help has consequences for service members.

The case highlights a set of military regulations that, in vaguely defined circumstances, requires doctors to inform commanding officers of certain medical details, including drug tests, even if those tests are conducted for legitimate medical reasons necessary for adequate care. Allowing punishment when service members are looking for help could act as a deterrent in a community where mental health is still a taboo topic among many, despite recent leadership attempts to more openly discuss getting assistance.

On April 11, Military.com reported the story of a sailor and his wife who alleged that the sailor's command, the destroyer USS Farragut, was retaliating against him for seeking mental health help.

Jatzael Alvarado Perez went to a military hospital to get help for his mental health struggles. As part of his treatment, he was given a drug test that came back positive for cannabinoids -- the family of drugs associated with marijuana. Perez denies having used any substances, but the test resulted in a referral to the ship's chief corpsman.

Perez's wife, Carli Alvarado, shared documents with Military.com that were evidence in the sailor's subsequent nonjudicial punishment, showing that the Farragut found out about the results because the psychologist emailed the ship's medical staff directly, according to a copy of the email.

"I'm not sure if you've been tracking, but OS2 Alvarado Perez popped positive for cannabis while inpatient," read the email, written to the ship's medical chief. Navy policy prohibits punishment for a positive drug test when administered as part of regular medical care.

The email goes on to describe efforts by the psychologist to assist in obtaining a second test -- one that could be used to punish Perez.

"We are working to get him a command directed urinalysis through [our command] today," it added.

Thursday, April 28, 2022

The State of Florida v. Kelvin Lee Coleman Jr.: the implications of neuroscience in the courtroom through a case study

P. Loizidou, R. E. Wieczorek-Fynn, & J. C. Wu
Psychology, Crime & Law 
Published online: 17 Mar 2022

Abstract

Neuroscience can provide evidence in some cases of legal matters, despite its tenuous nature. Among others, arguing for diminished capacity, insanity, or pleading for mitigation is the most frequent use of neurological evidence in the courtroom. While there is a plethora of studies discussing the moral and legal matters of the practice, there is a lack of studies examining specific cases and the subsequent applications of brain knowledge. This study details the capital punishment trial of Kelvin Lee Coleman Jr., charged in 2013 with double murder in Tampa, Florida, to illustrate the extent to which expert opinions – based on neuroimaging, neurological, and neuropsychiatric examinations – had an impact on the court's decisions. The defendant was sentenced to life imprisonment without the possibility of parole. According to the comments of the trial's jury, the most influential reason for not sentencing the defendant to death was that, during the incident, he was under extreme mental and emotional disturbance. Other reasons were evidence of brain abnormalities resulting from neurological insult, fetal alcohol syndrome, and orbitofrontal syndrome contributing to severely abnormal behavior and lack of impulse control.

Discussion

While this study addresses a single case, similar cases have in the past reached similar rulings. The evasion of death sentences has been especially common after the Hurst v. State decision requiring a unanimous jury vote before sentencing defendants to death. One such case is the State of Florida v. Luis Toledo, which took place in 2017. The defendant was not sentenced to death despite killing his wife and her two children because of the mitigation claims of neurological illness and epilepsy. Similarly, in the 2015 case of State of Florida v. Byron Burch, in which a defendant with a lengthy criminal record was charged with first-degree murder and burglary, mitigating evidence of brain damage and presumptive chronic traumatic encephalopathy (CTE) averted a death sentence. Both cases had PET neuroimaging analyzed by one of the coauthors (JCW). After quantitative electroencephalography was ruled inadmissible, the defense attorney presented PET scans to claim brain damage that hindered impulse control. The jurors recommended a sentence of life in prison without parole, which the judge ultimately imposed.

While evidence of brain damage seems to suffice in some cases for a sentence of life in prison without parole, it is of interest to examine further the link between brain damage and criminality. This is linked to two questions: first, how common is brain damage among criminals, and second, how much does brain damage affect the likelihood of criminality? Several studies have looked at the prevalence of brain damage and psychiatric disorders among incarcerated individuals. Mental illness is significantly over-represented in death-row samples relative to the general population (Cunningham & Vigen, 2002). TBI is particularly prevalent among criminals, with one study reporting that 82% of the 164 incarcerated individuals interviewed had sustained a TBI, 79% had sustained a TBI with loss of consciousness, and 43% had sustained four or more TBIs (Schofield et al., 2006). Other psychiatric conditions like depression, mania, and schizophrenia reportedly have significantly higher prevalence rates among general samples of individuals from jails and specific samples of homicide-charged individuals as compared to the rest of the population (Teplin, 1990; Wallace et al., 1998). Notably, mental disorders are usually accompanied by substance abuse, and in many cases substance abuse, especially alcohol abuse, confounded the association between TBI and criminal offending (Wallace et al., 1998).

Wednesday, April 27, 2022

APA decries Florida guidance calling for withholding treatment for gender non-conforming children

American Psychological Association
Press Release
Originally released 21 APR 22

Warns that Florida document is based on flawed, cherry-picked research

WASHINGTON — Following is a statement by Frank C. Worrell, PhD, president of the American Psychological Association, reacting to new guidance issued by the Florida Department of Health opposing science-based treatment for gender non-conforming children:

“This memo from the Florida Department of Health distorts the psychological science regarding the treatment of gender non-conforming children. Research into the treatment of gender non-conforming individuals has found that withholding evidence-based treatments can be psychologically damaging, especially to children and youths who are struggling with their gender identity. Rates of self-injury, suicidal ideation and suicide attempts are much higher among gender dysphoric youth, ironically attributed to stress associated with non-affirming approaches to these very real issues.   

“The Florida memo relies not on science, but on biased opinion pieces and cherry-picked findings to support a predetermined viewpoint and create a narrative that is not only scientifically inaccurate but also dangerous.  

“The American Psychological Association urges both policymakers and psychological practitioners to follow APA’s carefully researched ‘Guidelines for Psychological Practice With Transgender and Gender Nonconforming People (PDF, 461KB),’ which call for ‘culturally competent, developmentally appropriate, and trans-affirmative psychological practice’ with such individuals, including minors.

------------------
Please note: Psychologists are bound by APA's Ethical Principles of Psychologists and Code of Conduct and Practice Guidelines.

Psychologists may want to contemplate the concept of Conscientious Objector status to laws and regulations that conflict with ethical obligations and moral beliefs.

Tuesday, April 26, 2022

Ethical considerations for psychotherapists participating in Alcoholics Anonymous

Kohen, Casey B., & Conlin, William E.
Practice Innovations, Vol 7(1), Mar 2022, 40-52.

Abstract

Because the demands of professional psychology can be taxing, psychotherapists are not immune to the development of mental health and substance use disorders. One estimate indicates that roughly 30% to 40% of psychologists know of a colleague with a current substance abuse problem (Good et al., 1995). Twelve-step mutual self-help groups, particularly Alcoholics Anonymous (AA), are the most widely used form of treatment for addiction in the United States. AA has empirically demonstrated effectiveness at fostering long-term treatment success and is widely accessible throughout the world. However, psychotherapist participation in AA raises a number of ethical concerns, particularly regarding the potential for extratherapy contact with clients and the development of multiple relationships. This article attempts to review the precarious ethical and practical situations that psychotherapists, either in long-term recovery or newly sober, may find themselves in during AA involvement. Moreover, this article provides suggestions for psychotherapists in AA regarding how to best adhere to both the principles of AA (i.e., the 12 steps and 12 traditions) and the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct.

Here is an excerpt:

Recent literature regarding the use of AA or other mutual self-help groups by psychotherapists is scant, but earlier studies suggest its effectiveness. A 1986 survey of 108 members of Psychologists Helping Psychologists (a seemingly defunct support group exclusively for substance dependent doctoral-level psychologists and students) shows that of the 94% of respondents maintaining abstinence, 86% attended AA (Thoreson et al., 1986). A separate study of 70 psychologists in recovery who were members of AA revealed the majority attained sobriety outside of formal treatment or intervention programs (Skorina et al., 1990). 

Because AA appears to be a vital resource for psychotherapists struggling with substance misuse, it is important to consider how to address ethical dilemmas that one might encounter while participating in AA.

Conclusion

Psychotherapists participating in AA may, at times, find that their professional responsibility of adhering to the APA Code of Ethics hinders some aspects of their categorical involvement in AA as defined by AA's 12 steps and 12 traditions. The psychotherapist in AA may need to adjust their personal AA "program" in comparison with the typical AA member in a manner that attempts to meet the requirements of the profession yet still provides them with enough support to maintain their professional competence. This article discusses reasonable compromises, specifically tailored to the length of the psychotherapist's sobriety, that minimize the potential for client harm. Ultimately, if the psychotherapist is unable to find an appropriate middle ground, where the personal needs of recovery can be met without damaging client welfare while respecting the client's rights, the psychotherapist should refer the client elsewhere. With these recommendations, psychotherapists should feel more comfortable participating in AA (or other mutual self-help groups) while also adhering to the ethical principles of our profession.

Monday, April 25, 2022

Morality just isn't Republicans' thing anymore

Steve Larkin
The Week
Originally posted 23 APR 22

Here is an excerpt:

There is no understanding the Republican Party without understanding its leader and id, former President Donald Trump. His sins and crimes have been enumerated many times. But for the record, the man is a serial adulterer who brags about committing sexual assault with impunity, responsible for three cameo appearances in Playboy videos, dishonest in his business dealings, and needlessly callous and cruel. And, finally, he claims that he has never asked God for forgiveness for any of this.

Trump's presidency would seem to have vindicated the Southern Baptist Convention's claim that "tolerance of serious wrong by leaders sears the conscience of the culture, spawns unrestrained immorality and lawlessness in the society, and surely results in God's judgment." Of course, that was about former President Bill Clinton and the Monica Lewinsky scandal. Now, tolerating this sort of behavior in a leader is par for the Republican Party course.

And Trump seems to have set a kind of example for other stars of the MAGAverse: Rep. Matt Gaetz is under investigation for paying for sex with an underage girl and sex trafficking; former Missouri Gov. Eric Greitens, who was forced to resign that post after accusations that he tried to use nude photos to blackmail a woman with whom he had an affair, has not let that stop him from running for the Senate; Rep. Madison Cawthorn has been accused of sexual harassment and other misconduct by women who were his classmates in college.

Democrats, of course, have their own fair share of scandals, criminals, and cads, and they see themselves as being on the moral side, too. But they're not running around championing those "traditional values."

Why do Republicans thrill to Trump and tolerate misbehavior which previous generations — maybe even the very same people, a few decades ago — would have viewed as immediately disqualifying? (A long time ago, Ronald Reagan being divorced and remarried was a serious problem for a small but noticeable group of voters.) Maybe it's because, while Trump is an extreme (and rich) example, in many ways he's not so different from his devotees.

Sunday, April 24, 2022

Individual vulnerability to industrial robot adoption increases support for the radical right

Anelli, M., Colantone, I., & Stanig, P. 
(2021). PNAS, 118(47), e2111611118.
https://doi.org/10.1073/pnas.2111611118

Significance

The success of radical-right parties across western Europe has generated much concern. These parties propose making borders less permeable, oppose ethnic diversity, and often express impatience with the institutions of representative democracy. Part of their recent success has been shown to be driven by structural economic changes, such as globalization, which triggers distributional consequences that, in turn, translate into voting behavior. We ask about the political consequences of a different structural change: the robotization of manufacturing. We propose a measure of individual exposure to automation and show that individuals more vulnerable to the negative consequences of automation tend to display more support for the radical right. Automation exposure raises support for the radical left too, but to a significantly lower extent.

Abstract

The increasing success of populist and radical-right parties is one of the most remarkable developments in the politics of advanced democracies. We investigate the impact of industrial robot adoption on individual voting behavior in 13 western European countries between 1999 and 2015. We argue for the importance of the distributional consequences triggered by automation, which generates winners and losers also within a given geographic area. Analysis that exploits only cross-regional variation in the incidence of robot adoption might miss important facets of this process. In fact, patterns in individual indicators of economic distress and political dissatisfaction are masked in regional-level analysis, but can be clearly detected by exploiting individual-level variation. We argue that traditional measures of individual exposure to automation based on the current occupation of respondents are potentially contaminated by the consequences of automation itself, due to direct and indirect occupational displacement. We introduce a measure of individual exposure to automation that combines three elements: 1) estimates of occupational probabilities based on employment patterns prevailing in the preautomation historical labor market, 2) occupation-specific automatability scores, and 3) the pace of robot adoption in a given country and year. We find that individuals more exposed to automation tend to display higher support for the radical right. This result is robust to controlling for several other drivers of radical-right support identified by earlier literature: nativism, status threat, cultural traditionalism, and globalization. We also find evidence of significant interplay between automation and these other drivers.
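
A rough reading of that three-part measure, in a hypothetical sketch: the function and variable names are ours and the numbers are invented; the authors' actual estimation is econometric and data-driven.

```python
# Hypothetical sketch of the exposure measure described above: occupation-level
# automatability weighted by the individual's predicted occupation probabilities
# (estimated from the pre-automation labor market), scaled by the pace of robot
# adoption in the person's country and year.
def automation_exposure(occupation_probs, automatability, robot_pace):
    risk = sum(p * automatability[occ] for occ, p in occupation_probs.items())
    return risk * robot_pace

# Illustrative person whose traits predict manual occupations.
probs = {"machine_operator": 0.6, "clerk": 0.3, "manager": 0.1}   # sums to 1
scores = {"machine_operator": 0.9, "clerk": 0.5, "manager": 0.1}  # automatability
print(automation_exposure(probs, scores, robot_pace=1.8))  # higher = more exposed
```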

Conclusion

We study the effects of robot adoption on voting behavior in western Europe. We find that higher exposure to automation increases support for radical-right parties. We argue that an individual-level analysis of vulnerability to automation is required, given the prominent role played by the distributional effects of automation unfolding within geographic areas. We also argue that measures of automation exposure based on an individual’s current occupation, as used in previous studies, are potentially problematic, due to direct and indirect displacement induced by automation. We then propose an approach that combines individual observable features with historical labor-market data. Our paper provides further evidence on the material drivers behind the increasing support for the radical right. At the same time, it takes into account the role of cultural factors and shows evidence of their interplay with automation in explaining the political realignment witnessed by advanced Western democracies.

Saturday, April 23, 2022

Historical Fundamentalism? Christian Nationalism and Ignorance About Religion in American Political History

S. L. Perry, R. Braunstein, et al.
Journal for the Scientific Study of Religion 
(2022) 61(1):21–40

Abstract

Religious right leaders often promulgate views of Christianity's historical preeminence, privilege, and persecution in the United States that are factually incorrect, suggesting credulity, ignorance, or perhaps, a form of ideologically motivated ignorance on the part of their audience. This study examines whether Christian nationalism predicts explicit misconceptions regarding religion in American political history and explores theories about the connection. Analyzing nationally representative panel data containing true/false statements about religion's place in America's founding documents, policies, and court decisions, Christian nationalism is the strongest predictor that Americans fail to affirm factually correct answers. This association is stronger among whites compared to black Americans and religiosity actually predicts selecting factually correct answers once we account for Christian nationalism. Analyses of “do not know” response patterns find more confident correct answers from Americans who reject Christian nationalism and more confident incorrect answers from Americans who embrace Christian nationalism. We theorize that, much like conservative Christians have been shown to incorrectly answer science questions that are “religiously contested,” Christian nationalism inclines Americans to affirm factually incorrect views about religion in American political history, likely through their exposure to certain disseminators of such misinformation, but also through their allegiance to a particular political-cultural narrative they wish to privilege.

From the Discussion and Conclusions

Our findings extend our understanding of contemporary culture war conflicts in the United States in several key ways. Our finding that Christian nationalist ideology is associated not only with different political or social values (Hunter 1992; Smith 2000) but also with belief in explicitly wrong historical claims goes beyond issues of subjective interpretation or mere differences of opinion, underscoring the reality that Americans are divided by different information. Large groups of Americans hold incompatible beliefs about issues of fact, with those who ardently reject Christian nationalism more likely to confidently and correctly affirm factual claims and Christian nationalists more likely to confidently and incorrectly affirm misinformation. To be sure, we are unable to disentangle directionality here, which in all likelihood operates both ways. Christian nationalist ideology is made plausible by false or exaggerated claims about the evangelical character of the nation's founders and founding documents. Yet Christian nationalism, as an ideology, may also foster a form of motivated ignorance or credulity toward a variety of factually incorrect statements, among them the preeminence and growing persecution of Christianity in the United States.

Closely related to this last point, our findings extend recent research by further underscoring the powerful influence of Christian nationalism as the ideological source of credulity supporting and spreading far-right misinformation.

Friday, April 22, 2022

Generous with individuals and selfish to the masses

Alós-Ferrer, C.; García-Segarra, J.; Ritschel, A.
(2022). Nature Human Behaviour, 6(1):88-96.

Abstract

The seemingly rampant economic selfishness suggested by many recent corporate scandals is at odds with empirical results from behavioural economics, which demonstrate high levels of prosocial behaviour in bilateral interactions and low levels of dishonest behaviour. We design an experimental setting, the ‘Big Robber’ game, where a ‘robber’ can obtain a large personal gain by appropriating the earnings of a large group of ‘victims’. In a large laboratory experiment (N = 640), more than half of all robbers took as much as possible and almost nobody declined to rob. However, the same participants simultaneously displayed standard, predominantly prosocial behaviour in Dictator, Ultimatum and Trust games. Thus, we provide direct empirical evidence showing that individual selfishness in high-impact decisions affecting a large group is compatible with prosociality in bilateral low-stakes interactions. That is, human beings can simultaneously be generous with others and selfish with large groups.

From the Discussion

Our results demonstrate that socially relevant selfishness in the large is fully compatible with evidence from experimental economics on bilateral, low-stakes games at the individual level, without requiring arguments relying on population differences (in fact, we found no statistically significant differences in the behavior of participants with or without an economics background). The same individuals can behave selfishly when interacting with a large group of other people while, at the same time, displaying standard levels of prosocial behavior in commonly used laboratory tasks where only one other individual is involved. Additionally, however, individual differences in behavior in the Big Robber Game correlate with individual selfishness in the DG/UG/TG, i.e., Extreme Robbers gave less in the DG, offered less in the UG, and transferred less in the TG than Moderate Robbers.

The finding that people behave selfishly toward a large group while being generous toward individuals suggests that harming many individuals might be easier than harming just one, in line with received evidence that people are more willing to help one individual than many. It also reflects the tradeoff between personal gain and other-regarding concerns encompassed in standard models of social preferences, although this particular implication had not been demonstrated so far. When facing a single opponent in a bilateral game, appropriating a given monetary amount can result in a large interpersonal difference. When appropriating income from a large group of people, the same personal gain involves a smaller percentage difference with respect to each victim. Correspondingly, creating a given level of inequality with respect to others results in a much larger personal gain when income is taken from a group than when it is taken from just another person, and hence it is much more likely to offset the disutility from inequality aversion in the former case.
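
The inequality-aversion logic in that last sentence can be illustrated with the standard Fehr-Schmidt utility model. This is our illustration, not the authors' specification; the parameter values and payoffs are arbitrary.

```python
# Fehr-Schmidt utility: own payoff minus disutility from disadvantageous
# (envy) and advantageous (guilt) inequality, averaged over the other players.
def fs_utility(payoffs, i, alpha=0.8, beta=0.5):
    n = len(payoffs)
    xi = payoffs[i]
    envy = sum(max(xj - xi, 0) for xj in payoffs) / (n - 1)
    guilt = sum(max(xi - xj, 0) for xj in payoffs) / (n - 1)
    return xi - alpha * envy - beta * guilt

gain = 50.0
bilateral = [gain, -gain]           # take $50 from one victim
group = [gain] + [-gain / 50] * 50  # take $1 from each of 50 victims

print(fs_utility(bilateral, 0))  # robber's utility: 50 - 0.5 * 100 = 0.0
print(fs_utility(group, 0))      # robber's utility: 50 - 0.5 * 51  = 24.5
```

The same $50 gain creates a 100-dollar pairwise gap with a single victim but only a 51-dollar average gap when spread across fifty victims, so guilt-type inequality aversion is far less likely to outweigh the personal gain in the group case.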

Thursday, April 21, 2022

Social identity switching: How effective is it?

A. K. Zinn, A. Lavrica, M. Levine, & M. Koschate
Journal of Experimental Social Psychology
Volume 101, July 2022, 104309

Abstract

Psychological theories posit that we frequently switch social identities, yet little is known about the effectiveness of such switches. Our research aims to address this gap in knowledge by determining whether – and at what level of integration into the self-concept – a social identity switch impairs the activation of the currently active identity (“identity activation cost”). Based on the task-switching paradigm used to investigate task-set control, we prompted social identity switches and measured identity salience in a laboratory study using sequences of identity-related Implicit Association Tests (IATs). Pilot 1 (N = 24) and Study 1 (N = 64) used within-subjects designs with participants completing several social identity switches. The IAT congruency effect was no less robust after identity switches compared to identity repetitions, suggesting that social identity switches were highly effective. Study 2 (N = 48) addressed potential differences for switches between identities at different levels of integration into the self. We investigated whether switches between established identities are more effective than switches from a novel to an established identity. While response times showed the predicted trend towards a smaller IAT congruency effect after switching from a novel identity, we found a trend towards the opposite pattern for error rates. The registered study (N = 144) assessed these conflicting results with sufficient power and found no significant difference in the effectiveness of switching from novel as compared to established identities. An effect of cross-categorisation in the registered study was likely due to the requirement to learn individual stimuli.
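
For readers unfamiliar with the dependent measure, here is a minimal sketch of how an IAT congruency effect (and the hypothesized switch cost) would be computed. The response-time data below are simulated, not the authors'.

```python
# IAT congruency effect = mean RT on incongruent blocks minus mean RT on
# congruent blocks; a larger effect indicates a more salient (active) identity.
import numpy as np

def congruency_effect(rt_congruent, rt_incongruent):
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

rng = np.random.default_rng(1)
# Simulated response times (ms) following an identity repetition...
effect_rep = congruency_effect(rng.normal(650, 80, 100), rng.normal(720, 80, 100))
# ...and following an identity switch; a smaller effect after a switch would
# indicate an "identity activation cost".
effect_switch = congruency_effect(rng.normal(650, 80, 100), rng.normal(715, 80, 100))
print(f"repetition: {effect_rep:.0f} ms; switch: {effect_switch:.0f} ms; "
      f"cost: {effect_rep - effect_switch:.0f} ms")
```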

General discussion

The main aim of the current investigation was to determine the effectiveness of social identity switching. We assessed whether social identity switches lead to identity activation costs (impaired activation of the next identity) and whether social identity switches are less effective for novel than for well-established identities. The absence of identity activation costs in our results indicates that identity switching is effective. This has important theoretical implications, lending empirical support to self-categorisation theory, which holds that social identity switches are "inherently variable, fluid, and context dependent" (Turner et al., 1994, p. 454).

To our knowledge, our investigation is the first approach that has employed key aspects of the task switching paradigm to learn about the process of social identity switching. The potential cost of an identity switch also has important practical implications. Like task switches, social identity switches are ubiquitous. Technological developments over the last decades have resulted in different social identities being only “a click away” from becoming salient. We can interact with (and receive) information about different social identities on a permanent basis, wherever we are - by scrolling through social media, reading news on our smartphone, receiving emails and instant messages, often in rapid succession. The literature on task switch costs has changed the way we view “multi-tasking” by providing a better understanding of its impact on task performance and task selection. Similarly, our research has important practical implications for how well people can deal with frequent and rapid social identity switches.

Wednesday, April 20, 2022

The human black-box: The illusion of understanding human better than algorithmic decision-making

Bonezzi, A., Ostinelli, M., & Melzner, J. (2022). 
Journal of Experimental Psychology: General.

Abstract

As algorithms increasingly replace human decision-makers, concerns have been voiced about the black-box nature of algorithmic decision-making. These concerns raise an apparent paradox. In many cases, human decision-makers are just as much of a black-box as the algorithms that are meant to replace them. Yet, the inscrutability of human decision-making seems to raise fewer concerns. We suggest that one of the reasons for this paradox is that people foster an illusion of understanding human better than algorithmic decision-making, when in fact, both are black-boxes. We further propose that this occurs, at least in part, because people project their own intuitive understanding of a decision-making process more onto other humans than onto algorithms, and as a result, believe that they understand human better than algorithmic decision-making, when in fact, this is merely an illusion.

General Discussion

Our work contributes to prior literature in two ways. First, it bridges two streams of research that have thus far been considered in isolation: IOED (illusion of explanatory depth; Rozenblit & Keil, 2002) and projection (Krueger, 1998). IOED has mostly been documented for mechanical devices and natural phenomena and has been attributed to people confusing a superficial understanding of what something does for how it does it (Keil, 2003). Our research unveils a previously unexplored driver of IOED, namely, the tendency to project one's own cognitions onto others, and in so doing extends the scope of IOED to human decision-making. Second, our work contributes to the literature on clinical versus statistical judgments (Meehl, 1954). Previous research shows that people tend to trust humans more than algorithms (Dietvorst et al., 2015). Among the many reasons for this phenomenon (see Grove & Meehl, 1996), one is that people do not understand how algorithms work (Yeomans et al., 2019). Our research suggests that people's distrust toward algorithms may stem not only from a lack of understanding of how algorithms work but also from an illusion of understanding how their human counterparts operate.

Our work can be extended by exploring other consequences and psychological processes associated with the illusion of understanding humans better than algorithms. As for consequences, more research is needed to explore how illusory understanding affects trust in humans versus algorithms. Our work suggests that the illusion of understanding humans more than algorithms can yield greater trust in decisions made by humans. Yet, to the extent that such an illusion stems from a projection mechanism, it might also lead to favoring algorithms over humans, depending on the underlying introspections. Because people's introspections can be fraught with biases and idiosyncrasies they might not even be aware of (Nisbett & Wilson, 1977; Wilson, 2004), people might erroneously project these same biases and idiosyncrasies more onto other humans than onto algorithms and consequently trust those humans less than algorithms. To illustrate, one might expect a recruiter to favor people of the same gender or ethnic background just because one may be inclined to do so. In these circumstances, the illusion of understanding humans better than algorithms might yield greater trust in algorithmic than human decisions (Bonezzi & Ostinelli, 2021).

Tuesday, April 19, 2022

Diffusion of Punishment in Collective Norm Violations

Keshmirian, A., Hemmatian, B., et al. 
(2022, March 7). PsyArXiv
https://doi.org/10.31234/osf.io/sjz7r

Abstract

People assign less punishment to individuals who inflict harm collectively, compared to those who do so alone. We show that this arises from judgments of diminished individual causal responsibility in the collective cases. In Experiment 1, participants (N = 1002) assigned less punishment to individuals involved in collective actions leading to intentional and accidental deaths, but not failed attempts, emphasizing that harmful outcomes, but not malicious intentions, were necessary and sufficient for the diffusion of punishment. Experiment 2a compared the diffusion of punishment for harmful actions with 'victimless' purity violations (e.g., eating human flesh in groups; N = 752). In victimless cases, where the question of causal responsibility for harm does not arise, diffusion of collective responsibility was greatly reduced—an effect replicated in Experiment 2b (N = 500). We propose discounting in causal attribution as the underlying cognitive mechanism for the reduction in proposed punishment for collective harmful actions.

From the Discussion

Our findings also bear on theories of moral judgment. First, they support the dissociation of causal and mental-state processes in moral judgment (Cushman, 2008; Rottman & Young, 2019; Young et al., 2007, 2010). Second, they support disparate judgment processes for harmful versus "victimless" moral violations (Chakroff et al., 2013, 2017; Dungan et al., 2017; Giner-Sorolla & Chapman, 2017; Rottman & Young, 2019). Third, they reinforce the idea that punishment often involves a "backward-looking" retributive focus on responsibility, rather than a "forward-looking" focus on rehabilitation, incapacitation, or deterrence (which, we presume, would generally favor treating solo and group actors equivalently). Punishers' future-oriented self-serving motives and their evolutionary roots need further investigation as alternative sources for punishment diffusion. For instance, punishing joint violators may produce more enemies for the punisher, reducing the motivation for a severe response.

Whether the diffusion of punishment and our causal explanation for it extends to other moral domains (e.g., fairness; Graham et al., 2011) is a topic for future research. Another interesting extension is whether different causal structures produce different effects on judgments. Our vignettes were intentionally ambiguous about causal chains and whether multiple agents overdetermined the harmful outcome. Contrasting diffusion in conjunctive moral norm violations (when collaboration is necessary for violation) with disjunctive ones (when one individual would suffice) is informative, since attributions of responsibility are generally higher in the former class (Gerstenberg & Lagnado, 2010; Kelley, 1973; Lagnado et al., 2013; Morris & Larrick, 1995; Shaver, 1985; Zultan et al., 2012).

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al.
PNAS, 2022, 119 (12) e2117432119 

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
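
As a toy illustration of the kind of metacognitive feedback described, consider the sketch below. The code is entirely hypothetical: the paper's tutor scores planning operations against an optimally derived strategy, whereas this sketch hard-codes the simplified rule for environments where immediate rewards do not predict long-term outcomes.

```python
# Toy tutor: in environments where immediate rewards are uninformative about
# long-term outcomes, the (simplified) optimal heuristic is to inspect final
# outcomes first, so the tutor rewards deep inspections and penalizes shallow ones.
def tutor_feedback(clicks, depth_of):
    """Score a learner's sequence of node inspections.

    clicks:   node ids in the order the learner inspected them
    depth_of: dict mapping node id -> depth in the decision tree
    """
    max_depth = max(depth_of.values())
    feedback = []
    for node in clicks:
        if depth_of[node] == max_depth:
            feedback.append((node, +1, "good: final outcome inspected"))
        else:
            feedback.append((node, -1, "costly: immediate rewards are uninformative"))
    return feedback

# Tiny two-level decision tree: a, b hold immediate rewards; A, B final outcomes.
depths = {"a": 1, "b": 1, "A": 2, "B": 2}
for node, score, msg in tutor_feedback(["a", "A", "B"], depths):
    print(node, score, msg)
```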

Saturday, April 16, 2022

Morality, punishment, and revealing other people’s secrets.

Salerno, J. M., & Slepian, M. L. (2022).
Journal of Personality & Social Psychology, 
122(4), 606–633. 
https://doi.org/10.1037/pspa0000284

Abstract

Nine studies represent the first investigation into when and why people reveal other people’s secrets. Although people keep their own immoral secrets to avoid being punished, we propose that people will be motivated to reveal others’ secrets to punish them for immoral acts. Experimental and correlational methods converge on the finding that people are more likely to reveal secrets that violate their own moral values. Participants were more willing to reveal immoral secrets as a form of punishment, and this was explained by feelings of moral outrage. Using hypothetical scenarios (Studies 1, 3–6), two controversial events in the news (hackers leaking citizens’ private information; Study 2a–2b), and participants’ behavioral choices to keep or reveal thousands of diverse secrets that they learned in their everyday lives (Studies 7–8), we present the first glimpse into when, how often, and one explanation for why people reveal others’ secrets. We found that theories of self-disclosure do not generalize to others’ secrets: Across diverse methodologies, including real decisions to reveal others’ secrets in everyday life, people reveal others’ secrets as punishment in response to moral outrage elicited from others’ secrets.

From the Discussion

Our data serve as a warning flag: one should be aware of a potential confidant’s views with regard to the morality of the behavior. Across 14 studies (Studies 1–8; Supplemental Studies S1–S5), we found that people are more likely to reveal other people’s secrets to the degree that they, personally, view the secret act as immoral. Emotional reactions to the immoral secrets explained this effect, such as moral outrage as well as anger and disgust, which were associated correlationally and experimentally with revealing the secret as a form of punishment. People were significantly more likely to reveal the same secret if the behavior was done intentionally (vs. unintentionally), if it had gone unpunished (vs. already punished by someone else), and in the context of a moral framing (vs. no moral framing). These experiments suggest a causal role for both the degree to which the secret behavior is immoral and the participants’ desire to see the behavior punished.  Additionally, we found that this psychological process did not generalize to non-secret information. Although people were more likely to reveal both secret and non-secret information when they perceived it to be more immoral, they did so for different reasons: as an appropriate punishment for the immoral secrets, and as interesting fodder for gossip for the immoral non-secrets.

Friday, April 15, 2022

Strategic identity signaling in heterogeneous networks

T. van der Does, M. Galesic, et al.
PNAS, 2022.
119 (10) e2117898119

Abstract

Individuals often signal identity information to facilitate assortment with partners who are likely to share norms, values, and goals. However, individuals may also be incentivized to encrypt their identity signals to avoid detection by dissimilar receivers, particularly when such detection is costly. Using mathematical modeling, this idea has previously been formalized into a theory of covert signaling. In this paper, we provide an empirical test of the theory of covert signaling in the context of political identity signaling surrounding the 2020 US presidential elections. To identify likely covert and overt signals on Twitter, we use methods relying on differences in detection between ingroup and outgroup receivers. We strengthen our experimental predictions with additional mathematical modeling and examine the usage of selected covert and overt tweets in a behavioral experiment. We find that participants strategically adjust their signaling behavior in response to the political constitution of their audiences. These results support our predictions and point to opportunities for further theoretical development. Our findings have implications for our understanding of political communication, social identity, pragmatics, hate speech, and the maintenance of cooperation in diverse populations.
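
A bare-bones sketch of the detection-difference idea mentioned above: the function and thresholds here are hypothetical, and the authors' actual identification procedure is more involved.

```python
# Classify a signal from the rates at which ingroup vs. outgroup receivers
# correctly detect the sender's political identity (both rates in [0, 1]).
def signal_type(ingroup_rate, outgroup_rate, overt_thresh=0.7, gap_thresh=0.3):
    if ingroup_rate >= overt_thresh and outgroup_rate >= overt_thresh:
        return "overt"    # recognizable to everyone
    if ingroup_rate - outgroup_rate >= gap_thresh:
        return "covert"   # readable to the ingroup, obscured from the outgroup
    return "unclassified"

print(signal_type(0.9, 0.85))  # -> overt
print(signal_type(0.8, 0.30))  # -> covert
```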

Significance

Much of online conversation today consists of signaling one's political identity. Although many signals are obvious to everyone, others are covert, recognizable to one's ingroup while obscured from the outgroup. This type of covert identity signaling is critical for collaborations in a diverse society, but measuring covert signals has been difficult, slowing down theoretical development. We develop a method to detect covert and overt signals in tweets posted before the 2020 US presidential election and use a behavioral experiment to test predictions of a mathematical theory of covert signaling. Our results show that covert political signaling is more common when the perceived audience is politically diverse, and they open doors to a better understanding of communication in politically polarized societies.

From the Discussion

The theory predicts that individuals should use more covert signaling in more heterogeneous groups or when they are in the minority. We found support for this prediction in the ways people shared political speech in a behavioral experiment. We observed the highest levels of covert signaling when audiences consisted almost entirely of cross-partisans, supporting the notion that covert signaling is a strategy for avoiding detection by hostile outgroup members. Of note, we selected tweets for our study at a time of heightened partisan divisions: the four weeks preceding the 2020 US presidential election. Consequently, these tweets mostly discussed the opposing political party. This focus was reflected in our behavioral experiment, in which we did not observe an effect of audience composition when all members were (more or less extreme) copartisans. In that societal context, participants might have perceived the cost of dislikes to be minimal and have likely focused on partisan disputes in their real-life conversations happening around that time. Future work testing the theory of covert signaling should also examine signaling strategies in copartisan conversations during times of salient intragroup political divisions.


Editor's Note: Wondering if this research generalizes to other covert forms of communication during psychotherapy.

Thursday, April 14, 2022

AI won’t steal your job, just make it meaningless

John Danaher
iainews.com
Originally published 18 MAR 22

New technologies are often said to be in danger of making humans redundant, replacing them with robots and AI, and making work disappear altogether. A crisis of identity and purpose might result from that, but Silicon Valley tycoons assure us that a universal basic income could at least take care of people’s material needs, leaving them with plenty of leisure time in which to forge new identities and find new sources of purpose.

This, however, paints an overly simplistic picture. What seems more likely to happen is that new technologies will not make humans redundant at a mass scale, but will change the nature of work, making it worse for many, and sapping the elements that give it some meaning and purpose. It’s the worst of both worlds: we’ll continue working, but our jobs will become increasingly meaningless. 

History has some lessons to teach us here. Technology has had a profound effect on work in the past, not just on the ways in which we carry out our day-to-day labour, but also on how we understand its value. Consider the humble plough. In its most basic form, it is a hand-operated tool, consisting of little more than a pointed stick that scratches a furrow through the soil. This helps a farmer to sow seeds but does little else. Starting in the Middle Ages, however, more complex, 'heavy' ploughs began to be used by farmers in Northern Europe. These heavy ploughs rotated and turned the earth, bringing nutrient-rich soils to the surface, and radically altering the productivity of farming. Farming ceased being solely about subsistence. It started to be about generating wealth.

The argument about how the heavy plough transformed the nature of work was advanced by historian Lynn White Jr in his classic study Medieval Technology and Social Change. Writing in the idiom of the early 1960s, he argued that “No more fundamental change in the idea of man’s relation to the soil can be imagined: once man had been part of nature; now he became her exploiter.”

It is easy to trace a line – albeit one that takes a detour through Renaissance mercantilism and the Industrial revolution – from the development of the heavy plough to our modern conception of work. Although work is still an economic necessity for many people, it is not just that. It is something more. We don’t just do it to survive; we do it to thrive. Through our work we can buy into a certain lifestyle and affirm a certain identity. We can develop mastery and cultivate self-esteem; we make a contribution to our societies and a name for ourselves. 

Wednesday, April 13, 2022

Moralization of rationality can stimulate, but intellectual humility inhibits, sharing of hostile conspiratorial rumors.

Marie, A., & Petersen, M. (2022, March 4). 
https://doi.org/10.31219/osf.io/k7u68

Abstract

Many assume that if citizens became more inclined to moralize the values of evidence-based and logical thinking, political hostility and conspiracy theories would be less widespread. Across two large surveys (N = 3675) run in the U.S. in 2021 (one exploratory and one preregistered), we provide the first demonstration that moralization of rationality can actually stimulate the spread of conspiratorial and hostile news. This reflects the fact that the moralization of rationality can be highly interrelated with status seeking, corroborating arguments that self-enhancing strategies often advance hidden behind claims to objectivity and morality. In contrast to moral grandstanding on the issue of rationality, our studies find robust evidence that intellectual humility (i.e., the awareness that intuitions are fallible and that suspending critique is often desirable) may immunize people from sharing and believing hostile conspiratorial news. All associations generalized to hostile conspiratorial news both "fake" and anchored in real events.

General Discussion

Many observers assume that citizens more morally sensitized to the values of evidence-based and methodic thinking would be better protected from the perils of political polarization, conspiracy theories, and "fake news." Yet attention to the discourse of individuals who pass along politically hostile and conspiratorial claims suggests that they often sincerely believe themselves to be free and independent "critical thinkers" who care more about "facts" than the "unthinking sheep" to which they assimilate most of the population (Harambam & Aupers, 2017).

Across two large online surveys (N = 3675) conducted in the context of the highly polarized U.S. of 2021, we provide the first piece of evidence that moralizing epistemic rationality—a motivation for rationality defined in the abstract—may stimulate the dissemination of hostile conspiratorial views. Specifically, respondents who reported viewing the grounding of one's beliefs in evidence and logic as a moral virtue (Ståhl et al., 2016) were more likely to share hostile conspiratorial news targeting their political opponents on social media than individuals low on this trait. Importantly, the effect generalized to two types of news stories overtly targeting the participant's outgroup: "fake" news making entirely fabricated claims, and news anchored in real events.

Tuesday, April 12, 2022

The Affective Harm Account (AHA) of Moral Judgment: Reconciling Cognition and Affect, Dyadic Morality and Disgust, Harm and Purity

Kurt Gray, Jennifer K. MacCormack, et al.
In Press (2022)
Journal of Personality and Social Psychology

Abstract

Moral psychology has long debated whether moral judgment is rooted in harm vs. affect. We reconcile this debate with the Affective Harm Account (AHA) of moral judgment. The AHA understands harm as an intuitive perception (i.e., perceived harm), and divides "affect" into two: embodied visceral arousal (i.e., gut feelings) and stimulus-directed affective appraisals (e.g., ratings of disgustingness). The AHA was tested in a randomized, double-blind pharmacological experiment with healthy young adults judging the immorality, harmfulness, and disgustingness of everyday moral scenarios (e.g., lying) and unusual purity scenarios (e.g., sex with a corpse) after receiving either a placebo or the beta-blocker propranolol (a drug that dampens visceral arousal). Results confirmed the three key hypotheses of the AHA. First, perceived harm and affective appraisals are neither competing nor independent but intertwined. Second, although both perceived harm and affective appraisals predict moral judgment, perceived harm is consistently relevant across all scenarios (in line with the Theory of Dyadic Morality), whereas affective appraisals are especially relevant in unusual purity scenarios (in line with affect-as-information theory). Third, the "gut feelings" of visceral arousal are not as important to morality as often believed. Dampening visceral arousal (via propranolol) did not directly impact moral judgment, but instead changed the relative contribution of affective appraisals to moral judgment—and only in unusual purity scenarios. By embracing a constructionist view of the mind that blurs traditional dichotomies, the AHA reconciles historic harm-centric and current affect-centric theories, parsimoniously explaining judgment differences across various moral scenarios without requiring any "moral foundations."

Discussion

Moral psychology has long debated whether moral judgment is grounded in affect or harm. Seeking to reconcile these apparently competing perspectives, we have proposed an Affective Harm Account (AHA) of moral judgment. This account is conciliatory because it highlights the importance of both perceived harm and affect, not as competing considerations but as joint partners—two different horses yoked together pulling the cart of moral judgment.

The AHA also adds clarity to the previously murky nature of “affect” in moral psychology, differentiating it both in nature and measurement as (at least) two phenomena—embodied, free-floating, visceral arousal (i.e., “gut feelings”) and self-reported, context-bound, affective appraisals (i.e., “this situation is gross”). The importance of affect in moral judgment—especially the “gut feelings” of visceral arousal—was tested via administration of propranolol, which dampens visceral arousal via beta-adrenergic receptor blockade. Importantly, propranolol allows us to manipulate more general visceral arousal (rather than targeting a specific organ, like the gut, or a specific state, like nausea). This increases the potential generalizability of these findings to other moral scenarios (beyond disgust) where visceral arousal might be relevant. We measured the effect of propranolol (vs. placebo) on ratings of moral condemnation, perceived harm, and affective appraisals (i.e., operationalized as ratings of disgust, as in much past work). These ratings were obtained for both everyday moral scenarios (Hofmann et al., 2018)—which are dyadic in structure and thus obviously linked to harm—and for unusual purity scenarios, which are frequently linked to affective appraisals of disgust (Horberg et al., 2009). This study offers support for the three hypotheses of the AHA.
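For readers who prefer a model to prose, the AHA's three hypotheses can be pictured as a moderated regression in which the affective-appraisal (disgust) slope varies by scenario type and drug condition while the harm slope stays constant. The sketch below simulates that pattern; it is our illustrative reading, not the published analysis, and every variable name and coefficient is an assumption.

```python
# Hypothetical moderated regression in the spirit of the AHA (simulated data,
# not the authors' analysis). Harm predicts condemnation everywhere; disgust
# matters mainly in unusual purity scenarios; propranolol shifts disgust's
# contribution only there.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "harm": rng.normal(size=n),
    "disgust": rng.normal(size=n),
    "purity": rng.integers(0, 2, size=n),       # 1 = unusual purity scenario
    "propranolol": rng.integers(0, 2, size=n),  # 1 = drug, 0 = placebo
})
df["condemnation"] = (0.5 * df.harm
                      + 0.4 * df.disgust * df.purity
                      + 0.2 * df.disgust * df.purity * df.propranolol
                      + rng.normal(size=n))

fit = smf.ols("condemnation ~ harm + disgust * purity * propranolol", data=df).fit()
print(fit.params.round(2))  # harm stays relevant; disgust terms hinge on purity
```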

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi A, Nili H, et al.
(2022) PLoS Biol 20(3): e3001565. 
https://doi.org/10.1371/journal.pbio.3001565

Abstract

A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity has proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.

Discussion

A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were affected by the need to reciprocate influence (i.e., normative) only in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social changes of mind (we tend to change our mind towards more accurate partners).
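The task logic lends itself to a compact trial-level model: a change of mind driven by low confidence reflects informational conformity, whereas a change driven by how much the partner previously took one's advice reflects normative reciprocity, which should matter only with human partners. The sketch below simulates that dissociation; it is an assumption-laden illustration, not the authors' pipeline.

```python
# Hypothetical trial-by-trial logistic regression (simulated data, not the
# authors' analysis): informational conformity loads on confidence for both
# partner types; normative (reciprocal) conformity appears only with humans.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_trials = 2000
df = pd.DataFrame({
    "confidence": rng.normal(size=n_trials),         # own confidence this trial
    "partner_influence": rng.normal(size=n_trials),  # how much the partner used our advice
    "human_partner": rng.integers(0, 2, size=n_trials),
})
logit_p = (-0.8 * df.confidence
           + 0.6 * df.partner_influence * df.human_partner)
df["change_of_mind"] = (rng.random(n_trials) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("change_of_mind ~ confidence + partner_influence * human_partner",
                data=df).fit(disp=False)
print(fit.params.round(2))  # reciprocity should emerge only via the interaction
```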

Sunday, April 10, 2022

The habituation fallacy: Disaster victims who are repeatedly victimized are assumed to suffer less, and they are helped less

Hanna Zagefka
European Journal of Social Psychology
First published: 09 February 2022

Abstract

This paper tests the effects of lay beliefs that disaster victims who have been victimized by other events in the past will cope better with a new adverse event than first-time victims. It is shown that believing that disaster victims can get habituated to suffering reduces helping intentions towards victims of repeated adversity, because repeatedly victimized victims are perceived to be less traumatized by a new adverse event. In other words, those who buy into habituation beliefs will impute less trauma and suffering to repeated victims compared to first-time victims, and they will therefore feel less inclined to help those repeatedly victimized victims. This was demonstrated in a series of six studies, two of which were preregistered (total N = 1,010). Studies 1, 2 and 3 showed that beliefs that disaster victims become habituated to pain do indeed exist among lay people. Such beliefs are factually inaccurate, because repeated exposure to severe adversity makes it harder, not easier, for disaster victims to cope with a new negative event. Therefore, we call this belief the ‘habituation fallacy’. Studies 2, 3 and 4 demonstrated an indirect negative effect of a belief in the ‘habituation fallacy’ on ‘helping intentions’, via lesser ‘trauma’ ascribed to victims who had previously been victimized. Studies 5 and 6 demonstrated that a belief in the ‘habituation fallacy’ causally affects trauma ascribed to, and helping intentions towards, repeatedly victimized victims, but not first-time victims. The habituation fallacy can potentially explain reluctance to donate to humanitarian causes in those geographical areas that frequently fall prey to disasters.

From the General Discussion

Taken together, these studies show a tendency to believe in the habituation fallacy. That is, people tend to believe that victims who have previously suffered are less affected by new adversity than victims who are first-time sufferers. Buy-in to the habituation fallacy means that victims of repeated adversity are assumed to suffer less, and that they are consequently helped less. Consistent evidence for this was found across six studies, two of which were preregistered.

These results are important and add to the extant literature in significant ways. Many factors have been discussed as driving disaster giving (see e.g., Albayrak, Aydemir, & Gleibs, 2021; Bekkers & Wiepking, 2011; Berman et al., 2018; Bloom, 2017; Cuddy et al., 2007; Dickert et al., 2011; Evangelidis & Van den Bergh, 2013; Hsee et al., 2013; Kogut, 2011; Kogut et al., 2015; van Leeuwen & Täuber, 2012; Zagefka & James, 2015). Significant perceived suffering caused by an event is clearly a powerful factor that propels donors into action. However, although lay beliefs about disasters have been studied, lay beliefs about the suffering of victims have so far been neglected. Moreover, although some areas of the world are clearly visited by disasters more frequently than others, the potential effects of this on helping decisions have not previously been studied.

The present paper therefore addresses an important gap, by linking lay beliefs about disasters to both perceived previous victimization and perceived suffering of the victims.  Clearly, helping decisions are driven by emotional and often biased factors (Bloom, 2017), and this contribution sheds light on an important mechanism that is both affective and potentially biased in nature, thereby advancing our understanding of donor motivations (Chapman et al., 2020). 
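The paper's core claim is a simple mediation: habituation beliefs lower the trauma ascribed to repeat victims, which in turn lowers helping intentions. The product-of-coefficients sketch below illustrates that logic on simulated data; it is not the paper's analysis, and the variable names and effect sizes are assumptions.

```python
# Hypothetical product-of-coefficients mediation (simulated data): habituation
# belief -> lower ascribed trauma (path a) -> lower helping intention (path b).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1010  # total N reported across the six studies
habituation_belief = rng.normal(size=n)
ascribed_trauma = -0.4 * habituation_belief + rng.normal(size=n)  # path a
helping_intention = 0.5 * ascribed_trauma + rng.normal(size=n)    # path b

a = sm.OLS(ascribed_trauma, sm.add_constant(habituation_belief)).fit().params[1]
b = sm.OLS(helping_intention,
           sm.add_constant(np.column_stack([ascribed_trauma,
                                            habituation_belief]))).fit().params[1]
print(f"indirect effect (a*b): {a * b:.2f}")  # negative, as the paper reports
```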

Saturday, April 9, 2022

Deciding to be authentic: Intuition is favored over deliberation when authenticity matters

K. Oktar & T. Lombrozo
Cognition
Volume 223, June 2022, 105021

Abstract

Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people's folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker's commitment and authenticity—with only inferences about the decision-maker's authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

From the Discussion section

Our theory and results are broadly consistent with prior work on cross-domain variation in processing preferences (e.g., Inbar et al., 2010), as well as work showing that people draw social inferences from intuitive decisions (e.g., Tetlock, 2003). However, we bridge and extend these literatures by relating inferences made on the basis of an individual's decision to cross-domain variation in the prescribed roles of intuition and deliberation. Importantly, our work is unique in showing that neither judgments about how decisions ought to be made, nor inferences from decisions, are fully reducible to considerations of differential processing costs or the reliability of a given process for the case at hand. Our stimuli—unlike those used in prior work (e.g., Inbar et al., 2010; Pachur & Spaar, 2015)—involved deliberation costs that had already been incurred at the time of decision, yet participants nevertheless displayed substantial and systematic cross-domain variation in their inferences, processing judgments, and eventual decisions. Most dramatically, our matched-information scenarios in Experiment 3 ensured that effects were driven by decision basis alone. In addition to excluding the computational costs of deliberation and matching the decision to deliberate, these scenarios also matched the evidence available concerning the quality of each choice. Nonetheless, decisions that were based on intuition vs. deliberation were judged differently along a number of dimensions, including their authenticity.

Friday, April 8, 2022

What predicts suicidality among psychologists? An examination of risk and resilience

S. Zuckerman, O. R. Lightsey Jr. & J. White
Death Studies (2022)
DOI: 10.1080/07481187.2022.2042753

Abstract

Psychologists may have a uniquely high risk for suicide. We examined whether, among 172 psychologists, factors predicting suicide risk among the general population (e.g., gender and mental illness), occupational factors (e.g., burnout and secondary traumatic stress), and past trauma predicted suicidality. We also tested whether resilience and meaning in life were negatively related to suicidality and whether resilience buffered relationships between risk factors and suicidality. Family history of mental illness, number of traumas, and lifetime depression/anxiety predicted higher suicidality, whereas resilience predicted lower suicidality. At higher levels of resilience, the relationship between family history of suicide and suicidality was stronger.

From the Discussion section:

Contrary to hypotheses, however, resilience did not consistently buffer the relationship between vulnerability factors and suicidality. Indeed, resilience appeared to strengthen the relationship between having a family history of suicide and suicidality. It is plausible that psychologists overestimate their resilience or believe that they “should” be resilient given their training or their helping role (paralleling burnout-related themes identified in the culture of medicine, “show no weakness” and “patients come first;” see Williams et al., 2020, p. 820). Similarly, persons who believe that they are generally resilient may be demoralized by their inability to prevent a family history of suicide from negatively affecting them, and this demoralization may make family history of suicide a particularly strong predictor among these individuals. Alternatively, this result could stem from the BRS, which may not measure the components of resilience that protect against suicidality, or it could be an artifact of small sample size and low power for detecting moderation (Frazier et al., 2004). Of course, interaction terms are symmetric, and the resilience x family history of suicide interaction can also be interpreted to mean that family history of suicide strengthens the relationship between resilience and suicidality: when there is a family history of suicide, resilience has a positive relationship with suicidality, whereas when there is no family history of suicide, resilience has a negative relationship with suicidality.
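The symmetry point at the end of that paragraph is easy to see in code: a single interaction coefficient supports two equivalent readings, one as family history moderating the resilience slope and one as resilience moderating the family-history slope. The sketch below uses simulated data with the reported crossover pattern; it is an illustration, not the study's analysis.

```python
# Hypothetical moderation sketch (simulated data): the resilience x family
# history interaction read as two simple slopes for resilience.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 172  # sample size reported in the paper
resilience = rng.normal(size=n)
family_history = rng.integers(0, 2, size=n)  # 1 = family history of suicide
suicidality = (-0.3 * resilience + 0.4 * family_history
               + 0.6 * resilience * family_history + rng.normal(size=n))

X = sm.add_constant(np.column_stack(
    [resilience, family_history, resilience * family_history]))
b = sm.OLS(suicidality, X).fit().params
print(f"resilience slope without family history: {b[1]:.2f}")  # protective
print(f"resilience slope with family history:    {b[1] + b[3]:.2f}")  # reversed
```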

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N., et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems that parameterize human values like harm and flourishing, and that avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy: teaching AIs to decode human feelings and behavior and to evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others) but crucially lacking the harm aversion and empathic concern that arise from vulnerability, embodiment, and affective empathy (which permits shared experience). An expanding, ubiquitous intelligence that has no means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses the empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scalable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.
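To make the proposal's "bottom-up" idea slightly more tangible, here is a deliberately minimal sketch of a reward function in which task success is coupled to preserving the agent's own simulated integrity and that of others. This is our illustrative reading of the idea, not the authors' implementation; the function, weights, and integrity scale are all assumptions.

```python
# Hypothetical reward shaping for an embodied, vulnerable agent: harm aversion
# and other-regard enter the objective directly rather than as top-down rules.
def reward(task_score: float, own_integrity: float, other_integrity: float,
           w_self: float = 1.0, w_other: float = 1.0) -> float:
    """Integrity values lie in [0, 1], where 1 means fully intact."""
    return task_score + w_self * own_integrity + w_other * other_integrity

# Example: a higher task score loses out once it comes at others' expense.
print(reward(task_score=0.9, own_integrity=1.0, other_integrity=0.2))  # 2.1
print(reward(task_score=0.5, own_integrity=1.0, other_integrity=1.0))  # 2.5
```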