Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, March 31, 2021

Psychedelic Moral Enhancement

Earp, B. (2018).
Royal Institute of Philosophy Supplement, 83, 415-439.
doi:10.1017/S1358246118000474

Abstract

The moral enhancement (or bioenhancement) debate seems stuck in a dilemma. On the one hand, the more radical proposals, while certainly novel and interesting, seem unlikely to be feasible in practice, or if technically feasible then most likely imprudent. But on the other hand, the more sensible proposals – sensible in the sense of being both practically achievable and more plausibly ethically justifiable – can be rather hard to distinguish from both traditional forms of moral enhancement, such as non-drug-mediated social or moral education, and non-moral forms of bioenhancement, such as smart-drug style cognitive enhancement. In this essay, I argue that bioethicists have paid insufficient attention to an alternative form of moral bioenhancement – or at least a likely candidate – that falls somewhere between these two extremes, namely the (appropriately qualified) use of certain psychedelic drugs.

Conclusion

I would like to conclude with a note of caution. Because I have been interested to explore the potentially positive role of psychedelics in moral self-development, I have primarily focused on “successful” anecdotes—that is, cases in which people seem genuinely to have benefitted, morally or otherwise, from their drug-enhanced experiences. But more negative experiences are certainly possible, as mentioned earlier. As the prominent drug researcher Ben Sessa argues, we are right to adopt a stance of healthy skepticism toward any proposal that, “in the eyes of the general public, is associated with recreational drug abuse.”

Indeed, psychedelic drugs—just like other drugs such as alcohol or prescription medication—can, when used irresponsibly, cause “physical, psychological and social harm, and even deaths.” So we must be cautious, and take seriously the concerns of those people who fear that the use of such drugs may cause “greater social and health problems than it may solve.” Even so, Sessa suggests that there is more than enough evidence already from recent, controlled studies to render plausible the folk knowledge—accumulated over centuries—that psychedelics can also be beneficial. At a minimum, he concludes, the “evidence against at least researching” psychedelics for therapeutic or enhancement purposes “appears to be very scant indeed.”

Tuesday, March 30, 2021

On Dual- and Single-Process Models of Thinking

De Neys, W. (2021). On Dual- and Single-Process Models of Thinking.
Perspectives on Psychological Science.
doi:10.1177/1745691620964172

Abstract

Popular dual-process models of thinking have long conceived intuition and deliberation as two qualitatively different processes. Single-process-model proponents claim that the difference is a matter of degree and not of kind. Psychologists have been debating the dual-process/single-process question for at least 30 years. In the present article, I argue that it is time to leave the debate behind. I present a critical evaluation of the key arguments and critiques and show that—contra both dual- and single-model proponents—there is currently no good evidence that allows one to decide the debate. Moreover, I clarify that even if the debate were to be solved, it would be irrelevant for psychologists because it does not advance the understanding of the processing mechanisms underlying human thinking.

Time to Move On

The dual- vs. single-process model debate has not been resolved, it is questionable whether it can be resolved, and even if it were resolved, it would not inform our theory development about the critical processing mechanisms underlying human thinking. This implies that the debate is irrelevant for the empirical study of thinking. In a sense, the choice between a single- and a dual-process model boils down to a choice between two different religions. Scholars can (and may) have different personal beliefs and preferences as to which model serves their conceptualizing and communicative goals best. However, what they cannot do is claim that there are good empirical or theoretical scientific arguments to favor one over the other.

I do not contest that the single- vs. dual-process model debate might have been useful in the past. For example, the relentless critique from single-process proponents helped to discard the erroneous perfect-feature-alignment view. Likewise, the work of Evans and Stanovich in trying to pinpoint defining features was helpful in starting to sketch the descriptive building blocks of the mental simulation and cognitive decoupling processes. Hence, I do believe that the debate has had some positive by-products.

Monday, March 29, 2021

The problem with prediction

Joseph Fridman
aeon.com
Originally published 25 Jan 21

Here is an excerpt:

Today, many neuroscientists exploring the predictive brain deploy contemporary economics as a similar sort of explanatory heuristic. Scientists have come a long way in understanding how ‘spending metabolic money to build complex brains pays dividends in the search for adaptive success’, remarks the philosopher Andy Clark, in a notable review of the predictive brain. The idea of the predictive brain makes sense because it is profitable, metabolically speaking. Similarly, the psychologist Lisa Feldman Barrett describes the primary role of the predictive brain as managing a ‘body budget’. In this view, she says, ‘your brain is kind of like the financial sector of a company’, predictively allocating resources, spending energy, speculating, and seeking returns on its investments. For Barrett and her colleagues, stress is like a ‘deficit’ or ‘withdrawal’ from the body budget, while depression is bankruptcy. In Blackmore’s day, the brain was made up of sentries and soldiers, whose collective melancholy became the sadness of the human being they inhabited. Today, instead of soldiers, we imagine the brain as composed of predictive statisticians, whose errors become our neuroses. As the neuroscientist Karl Friston said: ‘[I]f the brain is an inference machine, an organ of statistics, then when it goes wrong, it’ll make the same sorts of mistakes a statistician will make.’

The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’

Sunday, March 28, 2021

Negativity Spreads More than Positivity on Twitter after both Positive and Negative Political Situations

Schöne, J., Parkinson, B., & Goldenberg, A. 
(2021, January 2). 
https://doi.org/10.31234/osf.io/x9e7u

Abstract

What type of emotional language spreads further in political discourses on social media? Previous research has focused on situations that primarily elicited negative emotions, showing that negative language tended to spread further. The current project addressed the gap introduced when looking only at negative situations by comparing the spread of emotional language in response to both predominantly positive and negative political situations. In Study 1, we examined the spread of emotional language among tweets related to the winning and losing parties in the 2016 US elections, finding that increased negativity (but not positivity) predicted content sharing in both situations. In Study 2, we compared the spread of emotional language in two separate situations: the celebration of the US Supreme Court approval of same-sex marriage (positive), and the Ferguson Unrest (negative), finding again that negativity spread further. These results shed light on the nature of political discourse and engagement.

General Discussion

The goal of the project was to investigate what types of emotional language spread further in response to negative and positive political situations. In Studies 1 (same situation) and 2 (separate situations), we examined the spread of emotional language in response to negative and positive situations. Results from both of our studies suggested that negative language tended to spread further in both negative and positive situations. Analysis of political affiliation in both studies indicated that the users who produced the negative language in the political celebrations were ingroup members (conservatives in Study 1 and liberals in Study 2). Analysis of negative content produced in celebrations shows that negative language was mainly used to describe hardships or past obstacles. Combined, these two studies shed light on the nature of political engagement online.

Saturday, March 27, 2021

Veil-of-ignorance reasoning mitigates self-serving bias in resource allocation during the COVID-19 crisis

Huang, K. et al.
Judgment and Decision Making
Vol. 16, No. 1, pp 1-19.

Abstract

The COVID-19 crisis has forced healthcare professionals to make tragic decisions concerning which patients to save. Furthermore, the COVID-19 crisis has foregrounded the influence of self-serving bias in debates on how to allocate scarce resources. A utilitarian principle favors allocating scarce resources such as ventilators toward younger patients, as this is expected to save more years of life. Some view this as ageist, instead favoring age-neutral principles, such as “first come, first served”. Which approach is fairer? The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by reducing decision-makers’ use of potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning to the COVID-19 ventilator dilemma, asking participants which policy they would prefer if they did not know whether they are younger or older. Two studies (pre-registered; online samples; Study 1, N=414; Study 2 replication, N=1,276) show that veil-of-ignorance reasoning shifts preferences toward saving younger patients. The effect on older participants is dramatic, reversing their opposition toward favoring the young, thereby eliminating self-serving bias. These findings provide guidance to healthcare policymakers and frontline personnel charged with allocating scarce medical resources during times of crisis on how to remove self-serving biases.

Friday, March 26, 2021

Feeling authentic serves as a buffer against rejection

Gino, F. and Kouchaki, M.
Organizational Behavior and Human Decision Processes
Volume 160, September 2020, Pages 36-50

Abstract

Social exclusion is a painful yet common experience in many people’s personal and professional lives. This research demonstrates that feeling authentic serves as a buffer against social rejection, leading people to experience less social pain. Across five studies, using different manipulations of authenticity, different paradigms to create social exclusion, and different measures of feeling rejected, we found that experiencing authenticity led participants to appraise situations as less threatening and to experience lower feelings of rejection from the social exclusion. We also found that perceived threat explains these effects. Our findings suggest that authenticity may be an underused resource for people who perceive themselves to be, or actually are, socially excluded or ostracized. This research has diverse and important implications: Interventions that increase authenticity could be used to reduce perceptions of threatening situations and the pain of impending exclusion episodes in situations ranging from adjustment to college to organizational orientation programs.

Highlights

• Feeling authentic serves as a buffer against social rejection.

• Feeling authentic results in lower feelings of rejection after social exclusion.

• Experiencing authenticity leads people to appraise situations as less threatening.

• These effects are not driven by affect or self-esteem but by authenticity.

• Interventions that increase authenticity buffer against rejection.

From the General Discussion

Despite living in a society where connections with others can be easily made (e.g., through social media platforms such as Twitter and Facebook), people often experience loneliness and social exclusion. Social media allows us to find out what we are missing in our write-and-share or snap-and-share culture. Paradoxically, this real-time ability to stay connected can make the sting of exclusion much more painful. The present studies establish that feeling authentic can dampen threat responses and reduce feelings of rejection and perceived social exclusion.

Thursday, March 25, 2021

Religious Affiliation and Conceptions of the Moral Domain

Levine, S., Rottman, J., et al.
(2019, November 14). 

Abstract

What is the relationship between religious affiliation and conceptions of the moral domain? Putting aside the question of whether people from different religions agree about how to answer moral questions, here we investigate a more fundamental question: How much disagreement is there across religions about which issues count as moral in the first place? That is, do people from different religions conceptualize the scope of morality differently? Using a new methodology to map out how individuals conceive of the moral domain, we find dramatic differences among adherents of different religions. Mormon and Muslim participants moralized their religious norms, while Jewish participants did not. Hindu participants in our sample did not seem to make a moral/non-moral distinction of the same kind. These results suggest a profound relationship between religious affiliation and conceptions of the scope of the moral domain.

From the Discussion

We have found that it is neither true that secular people and religious people share a common conception of the moral domain nor that religious morality is expanded beyond secular morality in a uniform manner. Furthermore, when participants in a group did make a moral/non-moral distinction, there was broad agreement that norms related to harm, justice, and rights counted as moral norms. However, some religious individuals (such as the Mormon and Muslim participants) also moralized norms from their own religion that are not related to these themes. Meanwhile, others (such as the Jewish participants) acknowledged the special status of their own norms but did not moralize them. Yet others (such as the Hindu participants in our sample) seemed to make no distinction between the moral and the non-moral in the way that the other groups did. Our dataset, therefore, suggests that any theory about the lay conception of the scope of morality needs to explain why the Jewish participants in our dataset do not consider their own norms to be moral norms and why Mormon and Muslim participants do. To the extent that SDT and MFT make any predictions about how lay people decide whether a norm is moral, they too must find a way to explain these datasets.

Wednesday, March 24, 2021

Does observability amplify sensitivity to moral frames? Evaluating a reputation-based account of moral preferences

Capraro, V., Jordan, J., & Tappin, B. M. 
(2020, April 9). 
https://doi.org/10.31234/osf.io/bqjcv

Abstract

A growing body of work suggests that people are sensitive to moral framing in economic games involving prosociality, suggesting that people hold moral preferences for doing the “right thing”. What gives rise to these preferences? Here, we evaluate the explanatory power of a reputation-based account, which proposes that people respond to moral frames because they are motivated to look good in the eyes of others. Across four pre-registered experiments (total N = 9,601), we investigated whether reputational incentives amplify sensitivity to framing effects. Studies 1-3 manipulated (i) whether moral or neutral framing was used to describe a Trade-Off Game (in which participants chose between prioritizing equality or efficiency) and (ii) whether Trade-Off Game choices were observable to a social partner in a subsequent Trust Game. These studies found that observability does not significantly amplify sensitivity to moral framing. Study 4 ruled out the alternative explanation that the observability manipulation from Studies 1-3 is too weak to influence behavior. In Study 4, the same observability manipulation did significantly amplify sensitivity to normative information (about what others see as moral in the Trade-Off Game). Together, these results suggest that moral frames may tap into moral preferences that are relatively deeply internalized, such that the power of moral frames is not strongly enhanced by making the morally-framed behavior observable to others.

From the Discussion

Our results have implications for interventions that draw on moral framing effects to encourage socially desirable behavior. They suggest that such interventions can be successful even when behavior is not observable to others and thus reputation is not at stake—and in fact, that the efficacy of moral framing effects is not strongly enhanced by making behavior observable. Thus, our results suggest that targeting contexts where reputation is at stake is not an especially important priority for individuals seeking to maximize the impact of interventions based on moral framing. This conclusion provides an optimistic view of the potential of such interventions, given that there may be many contexts in which it is difficult to make behavior observable but yet possible to frame a decision in a way that encourages prosociality—for example, when crowdsourcing donations anonymously (or nearly anonymously) on the Internet. Future research should investigate the power of moral framing to promote prosocial behaviour in anonymous contexts outside of the laboratory.

Tuesday, March 23, 2021

Social Identity Theory: The Science of "Us vs. Them"


One of the most fundamental insights in the psychology of prejudice and discrimination can be found in "social identity theory." The theory, pioneered by Henri Tajfel and his colleagues, helps explain how the mere existence of ingroups and outgroups can give rise to hostility. The "us vs. them" mentality and the tribalism it evokes can be at least part of why groups have such trouble seeing eye to eye.

Originally posted 25 Jan 21

Monday, March 22, 2021

The Mistrust of Science

Atul Gawande
The New Yorker
Originally posted 01 June 2016

Here is an excerpt:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

Vaccine fears, for example, have persisted despite decades of research showing them to be unfounded. Some twenty-five years ago, a statistical analysis suggested a possible association between autism and thimerosal, a preservative used in vaccines to prevent bacterial contamination. The analysis turned out to be flawed, but fears took hold. Scientists then carried out hundreds of studies, and found no link. Still, fears persisted. Countries removed the preservative but experienced no reduction in autism—yet fears grew. A British study claimed a connection between the onset of autism in eight children and the timing of their vaccinations for measles, mumps, and rubella. That paper was retracted due to findings of fraud: the lead author had falsified and misrepresented the data on the children. Repeated efforts to confirm the findings were unsuccessful. Nonetheless, vaccine rates plunged, leading to outbreaks of measles and mumps that, last year, sickened tens of thousands of children across the U.S., Canada, and Europe, and resulted in deaths.

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.


5 years old, and still relevant.

Sunday, March 21, 2021

Who Should Stop Unethical A.I.?

Matthew Hutson
The New Yorker
Originally published 15 Feb 21

Here is an excerpt:

Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the “possible long-range effects of applying knowledge gained in the research,” lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.

A few years ago, a number of A.I.-research organizations began to develop systems for addressing ethical impact. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (sigchi) is, by virtue of its focus, already committed to thinking about the role that technology plays in people’s lives; in 2016, it launched a small working group that grew into a research-ethics committee. The committee offers to review papers submitted to sigchi conferences, at the request of program chairs. In 2019, it received ten inquiries, mostly addressing research methods: How much should crowd-workers be paid? Is it O.K. to use data sets that are released when Web sites are hacked? By the next year, though, it was hearing from researchers with broader concerns. “Increasingly, we do see, especially in the A.I. space, more and more questions of, Should this kind of research even be a thing?” Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, told me.

Shilton explained that questions about possible impacts tend to fall into one of four categories. First, she said, “there are the kinds of A.I. that could easily be weaponized against populations”—facial recognition, location tracking, surveillance, and so on. Second, there are technologies, such as Speech2Face, that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, there is automated-weapons research. And fourth, there are tools “to create alternate sets of reality”—fake news, voices, or images.

Saturday, March 20, 2021

The amoral atheist? A cross-national examination of cultural, motivational, and cognitive antecedents of disbelief, and their implications for morality

Ståhl, T. (2021). PLOS ONE.
https://doi.org/10.1371/journal.pone.0246593

Abstract

There is a widespread cross-cultural stereotype suggesting that atheists are untrustworthy and lack a moral compass. Is there any truth to this notion? Building on theory about the cultural, (de)motivational, and cognitive antecedents of disbelief, the present research investigated whether there are reliable similarities as well as differences between believers and disbelievers in the moral values and principles they endorse. Four studies examined how religious disbelief (vs. belief) relates to endorsement of various moral values and principles in a predominately religious (vs. irreligious) country (the U.S. vs. Sweden). Two U.S. M-Turk studies (Studies 1A and 1B, N = 429) and two large cross-national studies (Studies 2–3, N = 4,193), consistently show that disbelievers (vs. believers) are less inclined to endorse moral values that serve group cohesion (the binding moral foundations). By contrast, only minor differences between believers and disbelievers were found in endorsement of other moral values (individualizing moral foundations, epistemic rationality). It is also demonstrated that presumed cultural and demotivational antecedents of disbelief (limited exposure to credibility-enhancing displays, low existential threat) are associated with disbelief. Furthermore, these factors are associated with weaker endorsement of the binding moral foundations in both countries (Study 2). Most of these findings were replicated in Study 3, and results also show that disbelievers (vs. believers) have a more consequentialist view of morality in both countries. A consequentialist view of morality was also associated with another presumed antecedent of disbelief—analytic cognitive style.

Conclusion

The purpose of the present research was to systematically examine how conceptualizations of morality differ between disbelievers and believers, and to explore whether moral psychological differences between these groups could be due to four presumed antecedents of disbelief. The results consistently indicate that disbelievers and believers, in the U.S. as well as in Sweden, are equally inclined to view the individualizing moral foundations, Liberty/oppression, and epistemic rationality as important moral values. However, these studies also point to some consistent cross-national differences in the moral psychology of disbelievers as compared to believers. Specifically, disbelievers are less inclined than believers to endorse the binding moral foundations, and more inclined to engage in consequentialist moral reasoning. The present results further suggest that these differences may stem from disparities in exposure to CREDs, levels of perceived existential threat, and individual differences in cognitive style. It seems plausible that the more constrained and consequentialist view of morality that is associated with disbelief may have contributed to the widespread reputation of atheists as immoral in nature.

(italics added)
-------------------
Bottom line: Atheists are just as moral as religious folks; it's just that atheists don't use morality to promote social identity. And CREDs are credibility-enhancing displays.

Friday, March 19, 2021

Religion, parochialism and intuitive cooperation

Isler, O., Yilmaz, O. & John Maule, A. 
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-020-01014-3

Abstract

Religions promote cooperation, but they can also be divisive. Is religious cooperation intuitively parochial against atheists? Evidence supporting the social heuristics hypothesis (SHH) suggests that cooperation is intuitive, independent of religious group identity. We tested this prediction in a one-shot prisoner’s dilemma game, where 1,280 practising Christian believers were paired with either a coreligionist or an atheist and where time limits were used to increase reliance on either intuitive or deliberated decisions. We explored another dual-process account of cooperation, the self-control account (SCA), which suggests that visceral reactions tend to be selfish and that cooperation requires deliberation. We found evidence for religious parochialism but no support for SHH’s prediction of intuitive cooperation. Consistent with SCA but requiring confirmation in future studies, exploratory analyses showed that religious parochialism involves decision conflict and concern for strong reciprocity and that deliberation promotes cooperation independent of religious group identity.


------------

In essence, the research replicated the widespread tendency toward group bias: the Christian participants were more likely to cooperate with fellow Christians than with atheists.

But the researchers also found that participants were able to resist selfish impulses and cooperate more when given time to deliberate and think about their decisions, independent of whether their partner was a Christian or an atheist.

Thursday, March 18, 2021

Sacha Baron Cohen On 'Borat' Ethics And Why His Disguise Days Are Over

Terry Gross
npr.org - Fresh Air
Originally published 22 Feb 21

Here is an excerpt:

On whether he thinks that there's an ethical gray area to bringing characters into the world and deceiving people

Those are the discussions that we have in the writers' room continually: Is this ethical? What's the purpose of this scene? Is it just to be funny? Is there some satire? Is that satire worth it? When you're doing stuff like a gun rally and you could get shot, then morally it's very clear. Or if you're undermining one of Trump's inner circle, whose sole aim is to undermine the legitimacy of the election, then, yeah, that's moral. I mean, look at what Rudy did post Borat coming out. He spread this big lie that Trump had won the election. And that lie is so dangerous and so misleading that it led to the attack on the Capitol — and it hasn't ended.

So the morality of seeing how Rudy would react when he was alone in a room with an attractive young woman, I think that morality is pretty clear. I think it's evidence of the misogyny that was trumpeted by the president and was almost a badge of honor with his inner circle. What we did with Rudy was crucial. I mean, we made the movie to have an impact on the election. ... So ethically, I can stand by that all day long.

Is the movie as a whole ethical? Yes. We did it because there was a deeply unethical government in power. And there was no question. ... We had to do what we could to inspire people to vote and remind people of the immorality of the government prior to the election. ... I have no doubt about the morality of this film. I'm very proud of it.

Wednesday, March 17, 2021

Signaling Virtuous Victimhood as Indicators of Dark Triad Personalities

Ok, E., Qian, Y., Strejcek, B., & Aquino, K. 
(2020). Journal of Personality 
and Social Psychology. 

Abstract

We investigate the consequences and predictors of emitting signals of victimhood and virtue. In our first three studies, we show that the virtuous victim signal can facilitate nonreciprocal resource transfer from others to the signaler. Next, we develop and validate a victim signaling scale that we combine with an established measure of virtue signaling to operationalize the virtuous victim construct. We show that individuals with Dark Triad traits—Machiavellianism, Narcissism, Psychopathy—more frequently signal virtuous victimhood, controlling for demographic and socioeconomic variables that are commonly associated with victimization in Western societies. In Study 5, we show that a specific dimension of Machiavellianism—amoral manipulation—and a form of narcissism that reflects a person’s belief in their superior prosociality predict more frequent virtuous victim signaling. Studies 3, 4, and 6 test our hypothesis that the frequency of emitting virtuous victim signal predicts a person’s willingness to engage in and endorse ethically questionable behaviors, such as lying to earn a bonus, intention to purchase counterfeit products and moral judgments of counterfeiters, and making exaggerated claims about being harmed in an organizational context.

General Discussion

Fortune and human imperfection assure that at some point in life everyone will experience suffering, disadvantage, or mistreatment. When this happens, there will be some who face their burdens in silence, treating it as a private matter they must work out for themselves, and there will be others who make a public spectacle of their sufferings, label themselves as victims, and demand compensation for their pain. This latter response is what interests us in this series of studies. Much research documents the intrapsychic and
social costs of being a victim (Bar-Tal, Chernyak-Hai, Schori, & Gundar, 2009; Taylor, Wood, & Lichtman, 1983; Zur, 2013), yet the increasing presence of individuals and groups publicly claiming victim status has led many observers to conclude that Western societies have developed a culture of victimization that makes victim-claiming advantageous (Campbell & Manning, 2018).

As explained earlier, victim signaling can yield many positive personal and social outcomes, such as helping people heal and raising awareness about the conditions that lead to victimization.  Our article focuses on a different set of questions associated with victim signaling, including an examination of its functionality as a social influence tactic, how its effectiveness can be maximized by combining it with a virtue signal, who is likely to emit this dual signal, and whether the frequency of signaling virtuous victimhood can predict certain behaviors and judgments. 

Tuesday, March 16, 2021

The Psychology of Dehumanization



People have an amazing capacity to see their fellow human beings as...not human. Psychologists have studied this both in its blatant and more subtle forms. What does it mean to dehumanize? How can researchers capture everyday dehumanization? Is it just prejudice? What does it say about how we think about non-human animals?

An important, well done 11 minute video.

Published 4 Feb 2021

Monday, March 15, 2021

What is 'purity'? Conceptual murkiness in moral psychology.

Gray, K., DiMaggio, N., Schein, C., 
& Kachanoff, F. (2021, February 3).
https://doi.org/10.31234/osf.io/vfyut

Abstract

Purity is an important topic in psychology. It has a long history in moral discourse, has helped catalyze paradigm shifts in moral psychology, and is thought to underlie political differences. But what exactly is “purity?” To answer this question, we review the history of purity and then systematically examine 158 psychology papers that define and operationalize (im)purity. In contrast to the many concepts defined by what they are, purity is often understood by what it isn’t—obvious dyadic harm. Because of this “contra”-harm understanding, definitions and operationalizations of purity are quite varied. Acts used to operationalize impurity include taking drugs, eating your sister’s scab, vandalizing a church, wearing unmatched clothes, buying music with sexually explicit lyrics, and having a messy house. This heterogeneity makes purity a “chimera”—an entity composed of various distinct elements. Our review reveals that the “contra-chimera” of purity has 9 different scientific understandings, and that most papers define purity differently from how they operationalize it. Although people clearly moralize diverse concerns—including those related to religion, sex, and food—such heterogeneity in conceptual definitions is problematic for theory development. Shifting definitions of purity provide “theoretical degrees of freedom” that make falsification extremely difficult. Doubts about the coherence and consistency of purity raise questions about key purity-related claims of modern moral psychology, including the nature of political differences and the cognitive foundations of moral judgment.

Sunday, March 14, 2021

The “true me”—one or many?

Berent, I., & Platt, M. (2019, December 9). 
https://doi.org/10.31234/osf.io/tkur5

Abstract

Recent results suggest that people hold a notion of the true self, distinct from the self. Here, we seek to further elucidate the “true me”—whether it is good or bad, material or immaterial. Critically, we ask whether the true self is unitary. To address these questions, we invited participants to reason about John—a character who simultaneously exhibits both positive and negative moral behaviors. John’s character was gauged via two tests--a brain scan and a behavioral test, whose results invariably diverged (i.e., one test indicated that John’s moral core is positive and another negative). Participants assessed John’s true self along two questions: (a) Did John commit his acts (positive and negative) freely? and (b) What is John’s essence really? Responses to the two questions diverged. When asked to evaluate John’s moral core explicitly (by reasoning about his free will), people invariably described John’s true self as good. But when John’s moral core was assessed implicitly (by considering his essence), people sided with the outcomes of the brain test. These results demonstrate that people hold conflicting notions of the true self. We formally support this proposal by presenting a grammar of the true self, couched within Optimality Theory. We
show that the constraint rankings necessary to capture the explicit and implicit views of the true self are distinct. Our intuitive belief in a true unitary “me” is thus illusory.

From the Conclusion

When we consider a person’s moral core explicitly (by evaluating which acts they commit freely), we consider them as having a single underlying moral valence (rather than multiple competing attributes), and that moral core is decidedly good. Thus, our explicit notion of true moral self is good and unitary, a proposal that is supported by previous findings (e.g., De Freitas & Cikara, 2018; Molouki & Bartels, 2017; Newman et al., 2014b; Tobia, 2016). But when we consider the person’s moral fiber implicitly, we evaluate their essence--a notion that is devoid of specific moral valence (good or bad), but is intimately linked to their material body. This material view of essence is in line with previous results, suggesting that children (Gelman, 2003; Gelman & Wellman, 1991) and infants (Setoh et al., 2013) believe that living things must have “insides”, and that their essence corresponds to a piece of matter (Springer & Keil, 1991) that is localized at the center of the body (Newman & Keil, 2008). Further support for this material notion of essence is presented by people’s tendency to conclude that psychological traits that are localized in the brain are more likely to be innate (Berent et al., 2019; Berent et al., 2019, September 10). The persistent link between John’s essence and the outcomes of the brain probe is also in line with this proposal.

Saturday, March 13, 2021

The Dynamics of Motivated Beliefs

Zimmermann, Florian. 2020.
American Economic Review, 110 (2): 337-61.

Abstract
A key question in the literature on motivated reasoning and self-deception is how motivated beliefs are sustained in the presence of feedback. In this paper, we explore dynamic motivated belief patterns after feedback. We establish that positive feedback has a persistent effect on beliefs. Negative feedback, instead, influences beliefs in the short run, but this effect fades over time. We investigate the mechanisms of this dynamic pattern, and provide evidence for an asymmetry in the recall of feedback. Finally, we establish that, in line with theoretical accounts, incentives for belief accuracy mitigate the role of motivated reasoning.

From the Discussion

In light of the finding that negative feedback has only limited effects on beliefs in the long run, the question arises as to whether people should become entirely delusional about themselves over time. Note that results from the incentive treatments highlight that incentives for recall accuracy bound the degree of self-deception and thereby possibly prevent motivated agents from becoming entirely delusional. Further note that there exists another rather mechanical counterforce, which is that the perception of feedback likely changes as people become more confident. In terms of the experiment, if a subject believes that the chances of ranking in the upper half are mediocre, then that subject will likely perceive two comparisons out of three as positive feedback. If, instead, the same subject is almost certain they rank in the upper half, then that subject will likely perceive the same feedback as rather negative. Note that this “perception effect” is reflected in the Bayesian definition of feedback that we report as a robustness check in the Appendix of the paper. An immediate consequence of this change in perception is that the more confident an agent becomes, the more likely it is that they will obtain negative feedback. Unless an agent does not incorporate negative feedback at all, this should act as a force that bounds people’s delusions.
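The “perception effect” described above can be illustrated with a toy calculation. This is purely a sketch of the intuition; the relative-to-expectation definition of valence and all numbers are our illustrative assumptions, not the paper’s experimental design:

```python
# Toy illustration of the "perception effect": the same objective feedback
# (2 wins out of 3 rank comparisons) feels positive to a subject with
# mediocre confidence and negative to a highly confident subject, because
# perceived valence is relative to expectation.

def perceived_valence(wins, n_comparisons, p_upper_half):
    # Simplifying assumption for illustration: each comparison is expected
    # to be won with probability equal to the subject's belief of ranking
    # in the upper half.
    expected_wins = n_comparisons * p_upper_half
    return wins - expected_wins

modest = perceived_valence(2, 3, 0.5)      # expects 1.5 wins -> +0.5
confident = perceived_valence(2, 3, 0.95)  # expects 2.85 wins -> negative

assert modest > 0      # feedback feels like good news
assert confident < 0   # identical feedback feels like bad news
```

As the paper notes, this mechanically bounds self-deception: the more confident an agent becomes, the more of their feedback registers as negative.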

Friday, March 12, 2021

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Schubert, S., Caviola, L. & Faber, N.S. 
Sci Rep 9, 15100 (2019). 
https://doi.org/10.1038/s41598-019-50145-9

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

Thus, when asked in the most straightforward and unqualified way, participants do not find human extinction uniquely bad. 

Thursday, March 11, 2021

Decision making can be improved through observational learning

Yoon, H., Scopelliti, I. & Morewedge, C.
Organizational Behavior and 
Human Decision Processes
Volume 162, January 2021, 
Pages 155-188

Abstract

Observational learning can debias judgment and decision making. One-shot observational learning-based training interventions (akin to “hot seating”) can produce reductions in cognitive biases in the laboratory (i.e., anchoring, representativeness, and social projection), and successfully teach a decision rule that increases advice taking in a weight on advice paradigm (i.e., the averaging principle). These interventions improve judgment, rule learning, and advice taking more than practice. We find observational learning-based interventions can be as effective as information-based interventions. Their effects are additive for advice taking, and for accuracy when advice is algorithmically optimized. As found in the organizational learning literature, explicit knowledge transferred through information appears to reduce the stickiness of tacit knowledge transferred through observational learning. Moreover, observational learning appears to be a unique debiasing training strategy, an addition to the four proposed by Fischhoff (1982). We also report new scales measuring individual differences in anchoring, representativeness heuristics, and social projection.

Highlights

• Observational learning training interventions improved judgment and decision making.

• OL interventions reduced anchoring bias, representativeness, and social projection.

• Observational learning training interventions increased advice taking.

• Observational learning and information complementarily taught a decision rule.

• We provide new bias scales for anchoring, representativeness, and social projection.

Wednesday, March 10, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

Gary Grossman
VentureBeat
Originally posted 13 Feb 21

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals that are used like radar. In this research, participants in the study watched a video while radio signals were sent towards them and measured when they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In this novel, the thought police watchers are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking.


Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning. In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals--such as profit maximization via user engagement--can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.
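The reinforcement-learning principle the authors invoke can be sketched with a standard prediction-error update. The update rule and all parameter values below are textbook assumptions of ours for illustration, not the paper’s fitted model:

```python
# Minimal sketch: positive social feedback (e.g. likes and shares) for an
# outrage expression reinforces the propensity to express outrage again,
# via a simple reward-prediction-error update.

def update_propensity(propensity, feedback, learning_rate=0.1):
    # In this toy model the propensity doubles as the agent's current
    # reward expectation for expressing outrage.
    prediction_error = feedback - propensity
    return propensity + learning_rate * prediction_error

p = 1.0  # low initial expectation / propensity
for _ in range(20):
    p = update_propensity(p, feedback=10.0)  # consistently rewarded

assert p > 8.0  # propensity climbs toward the reward level over time
```

A newsfeed algorithm that boosts exposure effectively raises `feedback`, which under this update steadily raises the propensity — the amplification loop the authors describe.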


People are more likely to express online "moral outrage" if they have either been rewarded for it in the past or it's common in their own social network.  They are even willing to express far more moral outrage than they genuinely feel in order to fit in.

Monday, March 8, 2021

Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Stix, C., Maas, M.M.
AI Ethics (2021). 
https://doi.org/10.1007/s43681-020-00037-w

Abstract

Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

From the Conclusion

AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present an experienced fragmentation in the responsible AI policy community, amongst clusters of scholars focusing on ‘near-term’ AI risks, and those focusing on ‘longer-term’ risks. This paper has sought to map the practical space for inter-community collaboration, with a view towards the practical development of AI policy.

As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an ‘incompletely theorized agreement’, for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework.

Sunday, March 7, 2021

Why do inequality and deprivation produce high crime and low trust?


De Courson, B., Nettle, D. 
Sci Rep 11, 1937 (2021). 
https://doi.org/10.1038/s41598-020-80897-8

Abstract

Humans sometimes cooperate to mutual advantage, and sometimes exploit one another. In industrialised societies, the prevalence of exploitation, in the form of crime, is related to the distribution of economic resources: more unequal societies tend to have higher crime, as well as lower social trust. We created a model of cooperation and exploitation to explore why this should be. Distinctively, our model features a desperation threshold, a level of resources below which it is extremely damaging to fall. Agents do not belong to fixed types, but condition their behaviour on their current resource level and the behaviour in the population around them. We show that the optimal action for individuals who are close to the desperation threshold is to exploit others. This remains true even in the presence of severe and probable punishment for exploitation, since successful exploitation is the quickest route out of desperation, whereas being punished does not make already desperate states much worse. Simulated populations with a sufficiently unequal distribution of resources rapidly evolve an equilibrium of low trust and zero cooperation: desperate individuals try to exploit, and non-desperate individuals avoid interaction altogether. Making the distribution of resources more equal or increasing social mobility is generally effective in producing a high cooperation, high trust equilibrium; increasing punishment severity is not.

From the Discussion

Within criminology, our prediction of risky exploitative behaviour when in danger of falling below a threshold of desperation is reminiscent of Merton’s strain theory of deviance. Under this theory, deviance results when individuals have a goal (remaining constantly above the threshold of participation in society), but the available legitimate means are insufficient to get them there (neither foraging alone nor cooperation has a large enough one-time payoff). They thus turn to risky alternatives, despite the drawbacks of these (see also Ref.32 for similar arguments). This explanation is not reducible to desperation making individuals discount the future more steeply, which is often invoked as an explanation for criminality. Agents in our model do not face choices between smaller-sooner and larger-later rewards; the payoff for exploitation is immediate, whether successful or unsuccessful. Also note the philosophical differences between our approach and ‘self-control’ styles of explanation. Those approaches see offending as deficient decision-making: it would be in people’s interests not to offend, but some can’t manage it (see Ref.35 for a critical review). Like economic and behavioural-ecological theories of crime more generally, ours assumes instead that there are certain situations or states where offending is the best of a bad set of available options.
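The threshold logic described above can be made concrete with a toy expected-utility comparison. The payoff numbers, penalty size, and punishment probability are all our illustrative assumptions, not the paper’s calibrated model:

```python
# Toy version of the desperation-threshold argument: because utility falls
# off a cliff below the threshold, the risky "exploit" option (large payoff
# if successful, punishment if caught) beats safe foraging for a desperate
# agent, even when punishment is severe and probable -- being punished does
# not make an already desperate state much worse.

THRESHOLD = 10.0

def utility(resources):
    # Severe penalty for falling below the desperation threshold.
    return resources if resources >= THRESHOLD else resources - 100.0

def expected_utility_forage(resources):
    return utility(resources + 2.0)  # small, certain payoff

def expected_utility_exploit(resources, p_caught=0.7):
    caught = utility(resources - 5.0)    # punished: lose resources
    success = utility(resources + 15.0)  # quick route out of desperation
    return p_caught * caught + (1.0 - p_caught) * success

desperate, comfortable = 5.0, 50.0
assert expected_utility_exploit(desperate) > expected_utility_forage(desperate)
assert expected_utility_forage(comfortable) > expected_utility_exploit(comfortable)
```

The same comparison shows why the paper finds harsher punishment ineffective: for the desperate agent, both the punished and the foraging outcomes sit below the threshold, so only the chance of escaping it matters.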

Saturday, March 6, 2021

Robots, Ethics, and Intimacy: The Need for Scientific Research

Borenstein J., Arkin R. 
(2019) Philosophical Studies Series, 
vol 134. Springer, Cham. 

Abstract

Intimate relationships between robots and human beings may begin to form in the near future. Market forces, customer demand, and other factors may drive the creation of various forms of robots to which humans may form strong emotional attachments. Yet prior to the technology becoming fully actualized, numerous ethical, legal, and social issues must be addressed. This could be accomplished in part by establishing a rigorous scientific research agenda in the realm of intimate robotics, the aim of which would be to explore what effects the technology may have on users and on society more generally. Our goal is not to resolve whether the development of intimate robots is ethically appropriate. Rather, we contend that if such robots are going to be designed, then an obligation emerges to prevent harm that the technology could cause.

Friday, March 5, 2021

Free to blame? Belief in free will is related to victim blaming

Genschow, O., & Vehlow, B.
Consciousness and Cognition
Volume 88, February 2021, 103074

Abstract

The more people believe in free will, the harsher their punishment of criminal offenders. A reason for this finding is that belief in free will leads individuals to perceive others as responsible for their behavior. While research supporting this notion has mainly focused on criminal offenders, the perspective of the victims has been neglected so far. We filled this gap and hypothesized that individuals’ belief in free will is positively correlated with victim blaming—the tendency to make victims responsible for their bad luck. In three studies, we found that the more individuals believe in free will, the more they blame victims. Study 3 revealed that belief in free will is correlated with victim blaming even when controlling for just world beliefs, religious worldviews, and political ideology. The results contribute to a more differentiated view of the role of free will beliefs and attributed intentions.

Highlights

• Past research indicated that belief in free will increases the punishment of criminal offenders.

• However, this research ignored the perception of the victims.

• We filled this gap by conducting three studies.

• All studies find that belief in free will correlates with the tendency to blame victims.

From the Discussion

In the last couple of decades, claims that free will is nothing more than an illusion have become prevalent in the popular press (e.g., Chivers 2010; Griffin, 2016; Wolfe, 1997).  Based on such claims, scholars across disciplines started debating potential societal consequences for the case that people would start disbelieving in free will. For example, some philosophers argued that disbelief in free will would have catastrophic consequences, because people would no longer try to control their behavior and start acting immorally (e.g., Smilansky, 2000, 2002). Likewise, psychological research has mainly focused on the
downsides of disbelief in free will. For example, weakening free will belief led participants to behave less morally and responsibly (Baumeister et al., 2009; Protzko et al., 2016; Vohs & Schooler, 2008). In contrast to these results, our findings illustrate a more positive side of disbelief in free will, as higher levels of disbelief in free will would reduce victim blaming. 

Thursday, March 4, 2021

‘Pastorally dangerous’: U.S. bishops risk causing confusion about vaccines, ethicists say

Michael J. O’Loughlin
America Magazine
Originally published March 02, 2021

Here is an excerpt:

Anthony Egan, S.J., a Jesuit priest and lecturer in theology in South Africa, said church leaders publishing messages about hypothetical situations during a crisis is “unhelpful” as Catholics navigate life in a pandemic.

“I think it’s pastorally dangerous because people are dealing with all kinds of crises—people are faced with unemployment, people are faced with disease, people are faced with death—and to make this kind of statement just adds to the general feeling of unease, a general feeling of crisis,” Father Egan said, noting that in South Africa, which has been hard hit by a more aggressive variant, the Johnson & Johnson vaccine is the only available option. “I don’t think that’s pastorally helpful.”

The choice about taking a vaccine like Johnson & Johnson’s must come down to individual conscience, he said. “I think it’s irresponsible to make a claim that you must absolutely not or absolutely must take the drug,” he said.

Ms. Fullam agreed, saying modern life is filled with difficult dilemmas stemming from previous injustices and “one of the great things about the Catholic moral tradition is that we recognize the world is a messy place, but we don’t insist Catholics stay away from that messiness.” Catholics, she said, are called “to think about how to make the situation better” rather than retreat in the face of complexity and given the ongoing pandemic, receiving a vaccine with a remote connection to abortion could be the right decision—especially in communities where access to vaccines might be difficult.

Wednesday, March 3, 2021

Evolutionary biology meets consciousness: essay review

Browning, H., Veit, W. 
Biol Philos 36, 5 (2021). 
https://doi.org/10.1007/s10539-021-09781-7

Abstract

In this essay, we discuss Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul from an interdisciplinary perspective. Constituting perhaps the longest treatise on the evolution of consciousness, Ginsburg and Jablonka unite their expertise in neuroscience and biology to develop a beautifully Darwinian account of the dawning of subjective experience. Though it would be impossible to cover all its content in a short book review, here we provide a critical evaluation of their two key ideas—the role of Unlimited Associative Learning in the evolution of, and detection of, consciousness and a metaphysical claim about consciousness as a mode of being—in a manner that will hopefully overcome some of the initial resistance of potential readers to tackle a book of this length.

Here is one portion:

Modes of being

The second novel idea within their book is to conceive of consciousness as a new mode of being, rather than a mere trait. This part of their argument may appear unusual to many operating in the debate, not the least because this formulation—not unlike their choice to include Aristotle’s sensitive soul in the title—evokes a sense of outdated and strange metaphysics. We share some of this opposition to this vocabulary, but think it best conceived as a metaphor.

They begin their book by introducing the idea of teleological (goal-directed) systems and the three ‘modes of being’, taken from the works of Aristotle, each of which is considered to have a unique telos (goal). These are: life (survival/reproduction), sentience (value ascription to stimuli), and rationality (value ascription to concepts). The focus of this book is the second of these—the “sensitive soul”. Rather than a trait, such as vision, G&J see consciousness as a mode of being, in the same way as the emergence of life and rational thought also constitute new modes of being.

In several places throughout their book, G&J motivate their account through this analogy, i.e. by drawing a parallel from consciousness to life and/or rationality. Neither, they think, can be captured in a simple definition or trait, thus explaining the lack of progress on trying to come up with definitions for these phenomena. Compare their discussion of the distinction between life and non-life. Life, they argue, is not a functional trait that organisms possess, but rather a new way of being that opens up new possibilities; so too with consciousness. It is a new form of biological organization at a level above the organism that gives rise to a “new type of goal-directed system”, one which faces a unique set of challenges and opportunities. They identify three such transitions—the transition from non-life to life (the “nutritive soul”), the transition from non-conscious to conscious (the “sensitive soul”) and the transition from non-rational to rational (the “rational soul”). All three transitions mark a change to a new form of being, one in which the types of goals change. But while this is certainly correct in the sense of constituting a radical transformation in the kinds of goal-directed systems there are, we have qualms with the idea that this formal equivalence or abstract similarity can be used to ground more concrete properties. Yet G&J use this analogy to motivate their UAL account in parallel to unlimited heredity as a transition marker of life.

Tuesday, March 2, 2021

Surprise: 56% of US Catholics Favor Legalized Abortion

Dalia Fahmy
Pew Research Center
Originally posted 20 Oct 20

Here are two excerpts:

1. More than half of U.S. Catholics (56%) said abortion should be legal in all or most cases, while roughly four-in-ten (42%) said it should be illegal in all or most cases, according to the 2019 Pew Research Center survey. Although most Catholics generally approve of legalized abortion, the vast majority favor at least some restrictions. For example, while roughly one-third of Catholics (35%) said abortion should be legal in most cases, only around one-fifth (21%) said it should be legal in all cases. By the same token, 28% of Catholics said abortion should be illegal in most cases, while half as many (14%) said it should be illegal in all cases.

Compared with other Christian groups analyzed in the data, Catholics were about as likely as White Protestants who are not evangelical (60%) and Black Protestants (64%) to support legal abortion, and much more likely than White evangelical Protestants (20%) to do so. Among Americans who are religiously unaffiliated – those who say they are atheist, agnostic or “nothing in particular” – the vast majority (83%) said abortion should be legal in all or most cases.

(cut)

6. Even though most Catholics said abortion should generally be legal, a majority also said abortion is morally wrong. In fact, the share who said that abortion is morally wrong (57%), according to data from a 2017 survey, and the share who said it should be legal (56%) are almost identical. Among adults in other religious groups, there was a wide range of opinions on this question: Almost two-thirds of Protestants (64%) said abortion is morally wrong, including 77% of those who identify with evangelical Protestant denominations. Among the religiously unaffiliated, the vast majority said abortion is morally acceptable (34%) or not a moral issue (42%).

Monday, March 1, 2021

Morality justifies motivated reasoning in the folk ethics of belief

Corey Cusimano & Tania Lombrozo
Cognition
19 January 2021

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the General Discussion

5.2. Implications for motivated reasoning

Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment to not rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work has failed to demonstrate that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief, and specifically, prescribe motivated reasoning to others.

(cut)

Our findings suggest more proximate explanations for these biases: that lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally beneficial, optimistic beliefs than they do for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence one has.