Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, November 30, 2021

Community standards of deception: Deception is perceived to be ethical when it prevents unnecessary harm

Levine, E. E. (2021). 
Journal of Experimental Psychology: 
General. Advance online publication. 
https://doi.org/10.1037/xge0001081

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, the present research finds that deception is perceived to be ethical and individuals want to be deceived when deception is perceived to prevent unnecessary harm. This research identifies eight community standards of deception: rules of deception that most people abide by and recognize once articulated, but have never previously been codified. These standards clarify systematic circumstances in which deception is perceived to prevent unnecessary harm, and therefore, circumstances in which deception is perceived to be ethical. This work also documents how perceptions of unnecessary harm influence the use and judgment of deception in everyday life, above and beyond other moral concerns. These findings provide insight into when and why people value honesty and pave the way for future research on when and why people embrace deception.

From the Discussion

First, this work illuminates how people fundamentally think about deception. Specifically, this work identifies systematic circumstances in which deception is seen as more ethical than honesty, and it provides an organizing framework for understanding these circumstances. A large body of research identifies features of lies that make them seem more or less justifiable and therefore, that lead people to tell greater or fewer lies (e.g., Effron, 2018; Rogers, Zeckhauser, Gino, Norton, & Schweitzer, 2017; Shalvi, Dana, Handgraaf, & De Dreu, 2011). However, little research addresses whether people, upon introspection, ever actually believe it is right to tell lies; that is, whether lying is ever a morally superior strategy to truth-telling. The present research finds that people believe lying is the right thing to do when it prevents unnecessary harm. Notably, this finding reveals that lay people seem to have a relatively pragmatic view of deception and honesty. Rather than believing deception is a categorical vice – for example, because it damages social trust (Bok, 1978; Kant, 1949) or undermines autonomy (Bacon, 1872; Harris, 2011; Kant, 1959/1785) – people seem to conceptualize deception as a tactic that can and should be used to regulate another vice: harm.

Although this view of deception runs counter to prevailing normative claims and much of the existing scholarship in psychology and economics, which paints deception as generally unethical, it is important to note that this idea – that deception is and should be used pragmatically – is not novel. In fact, many of the rules of deception identified in the present research are alluded to in other philosophical, religious, and practical discussions of deception (see Table 2 for a review). Until now, however, these ideas have been siloed in disparate literatures, and behavioral scientists have lacked a parsimonious framework for understanding why individuals endorse deception in various circumstances. The present research identifies a common psychology that explains a number of seemingly unrelated “exceptions” to the norm of honesty, thereby unifying findings and arguments across psychology, religion, and philosophy under a common theoretical framework.

Monday, November 29, 2021

People use mental shortcuts to make difficult decisions – even highly trained doctors delivering babies

Manasvini Singh
The Conversation
Originally published 14 OCT 21

Here is an excerpt:

Useful time-saver or dangerous bias?

A bias arising from a heuristic implies a deviation from an “optimal” decision. However, identifying the optimal decision in real life is difficult because you usually don’t know what could have been: the counterfactual. This is especially relevant in medicine.

Take the win-stay/lose-shift strategy, for example. There are other studies that show that after “bad” events, physicians switch strategies. Missing an important diagnosis makes physicians test more on subsequent patients. Experiencing complications with a drug makes the physician less likely to prescribe it again.

But from a learning perspective, it’s difficult to say that ordering a test after missing a diagnosis is a flawed heuristic. Ordering a test always increases the chance that the physician catches an important diagnosis. So it’s a useful heuristic in some instances – say, for example, the physician had been underordering tests before, or the patient or insurer prefers shelling out the extra money for the chance to detect a cancer early.

In my study, though, switching delivery modes after complications offers no documented guarantees of avoiding future complications. And there is the added consideration of the short- and long-term health consequences of delivery-mode choice for mother and baby. Further, people are generally less tolerant of having inappropriate medical procedures performed on them than they are of being the recipients of unnecessary tests.

Tweaking the heuristic

Can physicians’ reliance on heuristics be lessened? Possibly.

Decision support systems that assist physicians with important clinical decisions are gathering momentum in medicine, and could help doctors course-correct after emotional events such as delivery complications.

For example, such algorithms can be built into electronic health records and perform a variety of tasks: flag physician decisions that appear nonstandard, identify patients who could benefit from a particular decision, summarize clinical information in ways that make it easier for physicians to digest and so on. As long as physicians retain at least some autonomy, decision support systems can do just that – support doctors in making clinical decisions.

Nudges that unobtrusively encourage physicians to make certain decisions can be accomplished by tinkering with the way options are presented – what’s called “choice architecture.” They already work for other clinical decisions.

Sunday, November 28, 2021

Attitude Moralization Within Polarized Contexts: An Emotional Value-Protective Response to Dyadic Harm Cues

D’Amore, C., van Zomeren, M., & Koudenburg, N. 
(2021). Personality and Social Psychology Bulletin.
https://doi.org/10.1177/01461672211047375

Abstract

Polarization about societal issues involves attitudinal conflict, but we know little about how such conflict transforms into moral conflict. Integrating insights on polarization and psychological value protection, we propose a model that predicts when and how attitude moralization (i.e., when attitudes become grounded in core values) may be triggered and develop within polarized contexts. We tested this model in three experiments (total N = 823) in the context of the polarized Zwarte Piet (blackface) debate in the Netherlands. Specifically, we tested the hypotheses that (a) situational cues to dyadic harm in this context (i.e., an outgroup that is perceived as intentionally inflicting harm onto innocent victims) trigger individuals to moralize their relevant attitude, because of (b) emotional value-protective responses. Findings supported both hypotheses across different regional contexts, suggesting that attitude moralization can emerge within polarized contexts when people are exposed to actions by attitudinal opponents perceived as causing dyadic harm.

From the Discussion Section

Harm as dyadic

First, our findings suggest that a focus on dyadic harm may be key to understanding triggers for attitude moralization within polarized contexts. Although most researchers have assigned the general concept of harm a central role in theory on moral judgments (e.g., Kohlberg, 1969; Piaget, 1965; Rozin & Singh, 1999; Turiel, 2006), no previous research on moralization has specifically focused on the dyadic element of harm within polarized contexts. The few empirical studies that examined the role of harm as a general (utilitarian) predictor in the process of attitude moralization about a polarized issue (Brandt et al., 2015; Wisneski & Skitka, 2017) did not find clear support for its predictive power. Interestingly, our consistent finding that strong cues to dyadic harm served as a situational trigger for attitude moralization adds to this literature by suggesting that for understanding moralization triggers within polarized contexts, it is important to understand when people perceive harm as more dyadic (in this case, when a concrete outgroup is perceived as intentionally harming innocent [ingroup] victims). Indeed, we suggest that, in polarized contexts at least, harm could trigger attitude moralization when it is perceived to be dyadic—that is, intentionally harmful. This implies that researchers interested in predicting attitude moralization within polarized contexts should consider conceptualizing and measuring harm as dyadic.

Saturday, November 27, 2021

Hate and meaning in life: How collective, but not personal, hate quells threat and spurs meaning in life

A. Elnakouri, C. Hubley, & I. McGregor
Journal of Experimental Social Psychology
Volume 98, January 2022

Abstract

Classic and contemporary perspectives link meaning in life to the pursuit of a significant purpose, free from incoherence. The typical assumption is that these meaningful purposes are prosocial, or at least benign. Here, we tested whether hate might also bolster meaning in life, via motivational states underlying significant purpose and coherence. In two studies (N = 847; Study 2 pre-registered), describing hatred (vs. mere dislike) towards collective entities (societal phenomena, institutions, groups), but not individuals, heightened feelings linked to the behavioral approach system (BAS; eagerness, determination, enthusiasm), which underlies a sense of significant purpose, and muted feelings linked to threat and the behavioral inhibition system (BIS; confused, uncertain, conflicted), which underlies a sense of incoherence. This high BAS and low BIS, in turn, predicted meaning in life beyond pre-manipulation levels. Exploratory analyses suggested that personal hatreds did not have the meaning-bolstering effects that collective hatreds had due to meaning-dampening negative feelings. Discussion focuses on motivation for collective and ideological hatreds in threatening circumstances.

Conclusion 

Classic and contemporary theories in psychology and beyond propose that various threats can cause zealous responses linked to collective hate (Arendt, 1951; Freud, 1937; Jonas et al., 2014). The present research offers one reason behind the appeal of collective hate in such circumstances: its ability to spur meaning in life. Shielded from the negativity of personal hate, collective forms of hate can mute threat and BIS-related feelings and boost BAS-related feelings, thereby fostering meaning in life. This research therefore helps us better understand the motivational drivers of hate and why it is an ever-present feature of the human condition.

Friday, November 26, 2021

Paranoia, self-deception and overconfidence

Rossi-Goldthorpe RA, et al (2021) 
PLoS Comput Biol 17(10): e1009453. 
https://doi.org/10.1371/journal.pcbi.1009453

Abstract

Self-deception, paranoia, and overconfidence involve misbeliefs about the self, others, and world. They are often considered mistaken. Here we explore whether they might be adaptive, and further, whether they might be explicable in Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem, rather than excessive social concern. The model suggests that spurious beliefs can have value – self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and devastating wars.

Author summary

Paranoia is the belief that others intend to harm you. Some people think that paranoia evolved to serve a coalitional function and should thus be related to the mechanisms of group membership and reputation management. Others have argued that its roots are much more basic, being based instead in how the individual models and anticipates their world – even non-social things. To adjudicate, we gave participants a difficult perceptual decision-making task, during which they received advice on what to decide from a partner, who was either a collaborator (in their group) or a competitor (outside of their group). Using computational modeling of participant choices, which allowed us to estimate the role of social and non-social processes in the decision, we found that the manipulation worked: people placed a stronger prior weight on the advice from a collaborator compared to a competitor. However, paranoia did not interact with this effect. Instead, paranoia was associated with participants’ beliefs about their own performance. When those beliefs were poor, paranoid participants relied heavily on the advice, even when it contradicted the evidence. Thus, we find a mechanistic link between paranoia, self-deception, and overconfidence.
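
To make the modeling idea above more concrete, here is a minimal Python sketch of precision-weighted advice-taking: a decider combines noisy perceptual evidence with a partner's suggestion, and the suggestion dominates once confidence in one's own performance is low. This is only an illustration under assumed parameter names and values; it is not the authors' fitted model.

import numpy as np

def choice_probability(own_evidence, advice, self_reliability, advice_prior_weight):
    # own_evidence: signed perceptual evidence for option A (+) vs. B (-)
    # advice: +1 if the partner suggests A, -1 if B
    # self_reliability: precision the decider assigns to their own performance
    # advice_prior_weight: prior weight on the advice (assumed higher for a
    #                      collaborator than for a competitor)
    combined = self_reliability * own_evidence + advice_prior_weight * advice
    return 1.0 / (1.0 + np.exp(-combined))  # probability of choosing option A

# Same weak evidence against the advice; a decider who doubts their own
# performance (low self_reliability) follows the advice instead.
weak_evidence = -0.6
print(choice_probability(weak_evidence, +1, self_reliability=3.0, advice_prior_weight=1.0))
print(choice_probability(weak_evidence, +1, self_reliability=0.5, advice_prior_weight=1.0))

In this toy version, lowering confidence in one's own performance is enough to tip the decision toward the (uninformative) advice, which mirrors the reported association between poor performance beliefs and heavy advice-taking.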

Thursday, November 25, 2021

APF Gold Medal Award for Life Achievement in the Practice of Psychology: Samuel Knapp

American Psychologist, 76(5), 812–814. 

This award recognizes a distinguished career and enduring contribution to the practice of psychology. Samuel Knapp’s long, distinguished career has resulted in demonstrable effects and significant contributions to best practices in professionalism, ethics education, positive ethics, and legislative advocacy as Director of Professional Affairs for the Pennsylvania Psychological Association and as an ethics educator extraordinaire. Dr. Knapp’s work has modified the way psychologists think about professional ethics through education, from avoiding disciplinary consequences to promoting overarching ethical principles to achieve the highest standards of ethical behavior. His focus on respectful collaboration among psychologists promotes honesty through nonjudgmental conversations. His Ethics Educators Workshop and other continuing education programs have brought together psychology practitioners and faculty to focus deeply on ethics and resulted in the development of the APA Ethics Educators Award.

From the Biography section

Ethics education became especially important in Pennsylvania when the Pennsylvania State Board of Psychology mandated ethics as part of its continuing education requirement. But even before that, members of the PPA Ethics Committee and Board of Directors saw ethics education as a vehicle to help psychologists improve the quality of their services to their patients. Also, to the extent that ethics education can help promote good decision-making, it could also reduce the emotional burden that professional psychologists often feel when faced with difficult ethical situations. Often the continuing education programs were interactive, with the secondary goals of helping psychologists build contacts with each other and giving the presenters an opportunity to promote authentic and compassion-driven approaches to teaching ethics. Yes, Sam and the other PPA ethics educators, such as the PPA attorney Rachael Baturin, also taught the laws, ethics codes, and risk management strategies. But these were only one component of PPA’s ethics education program. More important was the development of a cadre of psychologists/ethicists who taught most of these continuing education programs.


Wednesday, November 24, 2021

Moral masters or moral apprentices? A connectionist account of sociomoral evaluation in preverbal infants

Benton, D. T., & Lapan, C. 
(2021, February 21). 
https://doi.org/10.31234/osf.io/mnh35

Abstract

Numerous studies suggest that preverbal infants possess the ability to make sociomoral judgements and demonstrate a preference for prosocial agents. Some theorists argue that infants possess an “innate moral core” that guides their sociomoral reasoning. However, we propose that infants’ capacity for putative sociomoral evaluation and reasoning can just as likely be driven by a domain-general associative-learning mechanism that is sensitive to agent action. We implement this theoretical account in a connectionist computational model and show that it can account for the pattern of results in Hamlin et al. (2007), Hamlin and Wynn (2011), Hamlin (2013), and Hamlin, Wynn, Bloom, and Mahajan (2011). These are pioneering studies in this area and were among the first to examine sociomoral evaluation in preverbal infants. Based on the results of 5 computer simulations, we suggest that an associative-learning mechanism—instantiated as a computational (connectionist) model—can account for previous findings on preverbal infants’ capacity for sociomoral evaluation. These results suggest that an innate moral core may not be necessary to account for sociomoral evaluation in infants.

From the General Discussion

The simulations suggest that the preverbal infants’ reliable choice of helpers over hinderers in Hamlin et al. (2007), Hamlin and Wynn (2011), Hamlin (2013), and Hamlin et al. (2011) could have been based on extensive real-world experience with various kinds of actions (e.g., concordant action and discordant action) and an expectation—based on a learned second-order correlation—that agents that engage in certain kinds of actions (e.g., concordant action) have the capacity for interaction, whereas agents that engage in certain kinds of other actions (e.g., discordant action) either do not have the capacity for interaction or have less of a capacity for it.
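
As a rough illustration of the domain-general account described above (and not the authors' actual connectionist architecture), the short Python sketch below shows a delta-rule learner picking up a second-order correlation between action type and "capacity for interaction". The training regularities, rates, and parameters are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
lr = 0.1
w = np.zeros(2)  # association weights for [concordant, discordant] action features

# Simulated experience: concordant actions are usually followed by interaction (1),
# discordant actions usually are not (0). These base rates are assumed.
for _ in range(500):
    concordant = rng.random() < 0.5
    x = np.array([1.0, 0.0]) if concordant else np.array([0.0, 1.0])
    target = float(rng.random() < (0.9 if concordant else 0.1))
    prediction = x @ w
    w += lr * (target - prediction) * x  # delta-rule update

print("Expected interaction after concordant action:", round(w[0], 2))
print("Expected interaction after discordant action:", round(w[1], 2))

The learned expectations (roughly 0.9 versus 0.1) would be enough to ground a "preference" for agents who engage in concordant action, with no innate moral core assumed.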

Broadly, these results are consistent with work by Powell and Spelke (2018). They found that 4- to 5½-month-old infants looked longer at characters that engaged in concordant (i.e., imitative) action with other characters than characters that engaged in discordant (i.e., non-imitative) action with other characters (Exps. 1 and 2). Specifically, infants looked longer at characters that engaged in the same jumping motion and made the same sound as a target character than those characters that engaged in the same jumping motion but made a different sound than the target character. Our results are also consistent with their finding—which was based on a conceptual replication of Hamlin et al. (2007)—that 12-month-olds reliably reached for a character that engaged in concordant (i.e., imitative) action with the climber rather than a character that engaged in discordant (i.e., non-imitative) action with it (Exp. 4), even when those actions were non-social.

Tuesday, November 23, 2021

The Moral Identity Picture Scale (MIPS): Measuring the Full Scope of Moral Identity

Amelia Goranson, Connor O’Fallon, & Kurt Gray
Research Paper, in press

Abstract

Morality is core to people’s identity. Existing moral identity scales measure good/moral vs. bad/immoral, but the Theory of Dyadic Morality highlights two dimensions of morality: valence (good/moral vs. bad/immoral) and agency (high/agent vs. low/recipient). The Moral Identity Picture Scale (MIPS) measures this full space through 16 vivid pictures. Participants receive scores for each of four moral roles: hero, villain, victim, and beneficiary. The MIPS can also provide summary scores for good, evil, agent, and patient, and possesses test-retest reliability and convergent/divergent validity. Self-identified heroes are more empathic and higher in locus of control, villains are less agreeable and higher in narcissism, victims are higher in depression and lower in self-efficacy, and beneficiaries are lower in Machiavellianism. Although people generally see themselves as heroes, comparisons across known groups reveal relative differences: Duke MBA students self-identify more as villains, UNC social work students self-identify more as heroes, and workplace bullying victims self-identify more as victims. The data also reveal that the beneficiary role is ill-defined, collapsing the two-dimensional space of moral identity into a triangle anchored by hero, villain, and victim.

From the Discussion

We hope that, in providing this new measure of moral identity, future work can examine a broader sense of the moral world—beyond simple identifications of good vs. evil—using our expanded measure that captures not only valence but also role as a moral agent or patient. This measure expands upon previous measures related to moral identity (e.g., Aquino & Reed, 2002; Barriga et al., 2001; Reimer & Wade-Stein, 2004), replicating prior work that we divide the moral world up into good and evil, but demonstrating that the moral identification space includes another component as well: moral agency and moral patiency. Most past work has examined this “agent” side of moral identity—heroes and villains—but we can gain a fuller and more nuanced view of the moral world if we also examine their counterparts—moral patients/recipients. The MIPS provides us with the ability to examine moral identity across these 2 dimensions of valence (positive vs. negative) and agency (agent vs. patient). 

Monday, November 22, 2021

Revisiting Daubert: Judicial Gatekeeping and Expert Ethics in Court

Young, G., Goodman-Delahunty, J.
Psychol. Inj. and Law (2021). 
https://doi.org/10.1007/s12207-021-09428-8

Abstract

This article calls for pragmatic modifications to legal practices for the admissibility of scientific evidence, including forensic psychological science. We submit that Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) and the other two cases in the U.S. Supreme Court trilogy on expert evidence have largely failed to accomplish their gatekeeping goals to assure the reliability of scientific evidence admitted in court. Reliability refers to validity in psychological terms. Part of the problem with Daubert’s application in court is the gatekeeping function that it ascribes to judges. Most Daubert admissibility challenges are rejected by judges, who might lack the requisite scientific expertise to make informed decisions; educating judges on science might not be an adequate solution. Like others who have put forth the idea, pursuant to Federal Rule of Evidence (FRE) 706, we suggest that court-appointed impartial experts can help judges to adjudicate competing claims on admissibility. We further recommend that an expert witness ethics code sworn to in legal proceedings should be mandatory in all jurisdictions. The journal Psychological Injury and Law calls for comments and further recommendations on modifying Daubert admissibility challenges and procedures in civil and criminal cases to develop best practices to mitigate adversarial allegiance and other unconscious biases in expert decision-making.

Advantages of an Expert Witness Ethics Code Sworn to in Legal Proceedings

We suggest that in the field of psychological injury, jurisdictions in which courts reinforce expert obligations via an ethics code for expert witnesses will lead to more balanced and impartial testimony. The essential principle guiding a science-based expert witness ethics code sworn to in legal proceedings is that the process of forensic assessment, as well as the subsequent proffer of testimony in court based on those assessments, should account for all the reliable evidence gathered in a particular case as determined by methodologies informed by scientific research in the relevant field.  This recommendation is in line with psychological research showing that expert bias is reduced when experts do not focus on a single question or hypothesis, but address a “line up” of competing and alternative conclusions and hypotheses (Dror, 2020). The components of the expert witness oath, like the appointment of a court-appointed expert, encourage experts to adopt a differential diagnosis approach, in which all different conclusions and their probability are presented, rather than one conclusion (Dror, 2020). Opinions, interpretations, and conclusions based on the data, information, and evidence will more likely be impartial, fully scientifically informed, and just.

Sunday, November 21, 2021

Moral labels increase cooperation and costly punishment in a Prisoner’s Dilemma game with punishment option

Mieth, L., Buchner, A. & Bell, R.
Sci Rep 11, 10221 (2021). 
https://doi.org/10.1038/s41598-021-89675-6

Abstract

To determine the role of moral norms in cooperation and punishment, we examined the effects of a moral-framing manipulation in a Prisoner’s Dilemma game with a costly punishment option. In each round of the game, participants decided whether to cooperate or to defect. The Prisoner’s Dilemma game was identical for all participants with the exception that the behavioral options were paired with moral labels (“I cooperate” and “I cheat”) in the moral-framing condition and with neutral labels (“A” and “B”) in the neutral-framing condition. After each round of the Prisoner’s Dilemma game, participants had the opportunity to invest some of their money to punish their partners. In two experiments, moral framing increased moral and hypocritical punishment: participants were more likely to punish partners for defection when moral labels were used than when neutral labels were used. When the participants’ cooperation was enforced by their partners’ moral punishment, moral framing increased not only moral and hypocritical punishment but also cooperation. The results suggest that moral framing activates a cooperative norm that specifically increases moral and hypocritical punishment. Furthermore, the experience of moral punishment by the partners may increase the importance of social norms for cooperation, which may explain why moral framing effects on cooperation were found only when participants were subject to moral punishment.
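
To make the game structure concrete, here is a toy Python sketch of one round's payoff logic with a costly punishment stage. The payoff matrix, punishment cost, and punishment impact are illustrative assumptions rather than the experiment's actual parameters, and "moral" versus "hypocritical" punishment is read here, as is common in this literature, as a cooperator versus a defector punishing a defecting partner.

PAYOFFS = {  # (my move, partner's move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
PUNISH_COST = 1    # what the punisher pays
PUNISH_IMPACT = 3  # what the punished partner loses

def round_outcome(my_move, partner_move, i_punish, partner_punishes):
    my_payoff = PAYOFFS[(my_move, partner_move)]
    partner_payoff = PAYOFFS[(partner_move, my_move)]
    if i_punish:
        my_payoff -= PUNISH_COST
        partner_payoff -= PUNISH_IMPACT
    if partner_punishes:
        partner_payoff -= PUNISH_COST
        my_payoff -= PUNISH_IMPACT
    return my_payoff, partner_payoff

# A cooperator punishing a defector ("moral punishment"):
print(round_outcome("C", "D", i_punish=True, partner_punishes=False))  # (-1, 2)
# A defector punishing a defector ("hypocritical punishment"):
print(round_outcome("D", "D", i_punish=True, partner_punishes=False))  # (0, -2)

Under a framing manipulation, only the labels attached to the two moves change ("I cooperate"/"I cheat" versus "A"/"B"); the payoff structure above stays identical across conditions.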

General discussion

In human social life, a large variety of behaviors are regulated by social norms that set standards on how individuals should behave. One of these norms is the norm of cooperation. In many situations, people are expected to set aside their egoistic interests to achieve the collective best outcome. Within economic research, cooperation is often studied in social dilemma games. In these games, the complexities of human social interactions are reduced to their incentive structures. However, human behavior is not only determined by monetary incentives. There are many other important determinants of behavior among which social norms are especially powerful. The participants’ decisions in social dilemma situations are thus affected by their interpretation of whether a certain behavior is socially appropriate or inappropriate. Moral labels can help to reduce the ambiguity of the social dilemma game by creating associations to real-life cooperation norms. Thereby, the moral framing may support a moral interpretation of the social dilemma situation, resulting in the moral rejection of egoistic behaviors. Often, social norms are enforced by punishment. It has been argued “that the maintenance of social norms typically requires a punishment threat, as there are almost always some individuals whose self-interest tempts them to violate the norm” [p. 185]. 

Saturday, November 20, 2021

Narrative media’s emphasis on distinct moral intuitions alters early adolescents’ judgments

Hahn, L., et al. (2021).
Journal of Media Psychology: 
Theories, Methods, and Applications. 
Advance online publication.

Abstract

Logic from the model of intuitive morality and exemplars (MIME) suggests that narrative media emphasizing moral intuitions can increase the salience of those intuitions in audiences. To date, support for this logic has been limited to adults. Across two studies, the present research tested MIME predictions in early adolescents (ages 10–14). The salience of care, fairness, loyalty, and authority intuitions was manipulated in a pilot study with verbal prompts (N = 87) and in the main study with a comic book (N = 107). In both studies, intuition salience was measured after induction. The pilot study demonstrated that exposure to verbal prompts emphasizing care, fairness, and loyalty increased the salience of their respective intuitions. The main study showed that exposure to comic books emphasizing all four separate intuitions increased salience of their respective intuitions in early adolescents. Results are discussed in terms of relevance for the MIME and understanding narrative media’s influence on children’s moral judgments. 

Conclusion

Moral education is often at the forefront of parents’ concern for their children’s well-being. Although there is value in directly teaching children moral principles through instruction about what to do or not do, our results support an indirect approach to socializing children’s morality (Haidt & Bjorklund, 2008). This first step at exploring narrative media’s ability to activate moral intuitions in young audiences should be accompanied by additional work examining how “direct route” lessons, such as those contained in the Ten Commandments, may complement narrative media’s impact on children’s morality.

Our studies provide evidence supporting the MIME’s predictions about narrative content’s influence on moral intuition salience. Future research should build on these findings to examine whether this elevated intuition salience can influence broader values, judgments, and behaviors in children. Such examinations should be especially important for researchers interested in both the mechanism responsible for media’s influence and the extent of media’s impact on malleable, developing children, who may be socialized by media content.


Friday, November 19, 2021

Biological Essentialism Correlates with (But Doesn’t Cause?) Intergroup Bias

Bailey, A., & Knobe, J. 
(2021, September 17).
https://doi.org/10.31234/osf.io/rx8jc

Abstract

People with biological essentialist beliefs about social groups also tend to endorse biased beliefs about individuals in those groups, including stereotypes, prejudices, and intensified emphasis on the group. These correlations could be due to biological essentialism causing bias, and some experimental studies support this causal direction. Given this prior work, we expected to find that biological essentialism would lead to increased bias compared to a control condition and set out to extend this prior work in a new direction (regarding “value-based” essentialism). But although the manipulation affected essentialist beliefs and essentialist beliefs were correlated with stereotyping (Studies 1, 2a, and 2b), prejudice (Study 2a), and group emphasis (Study 3), there was no evidence that biological essentialism caused these outcomes. Given these findings, our initial research question became moot, and the present work focuses on reexamining the relationship between essentialism and bias. We discuss possible moderators, reverse causation, and third variables.


General Discussion

The present studies examined the relationship between biological essentialism and intergroup bias. As in prior work, we found that essentialist beliefs were correlated positively with stereotyping, including negative stereotyping, as well as group boundary intensification. This positive relationship was found for essentialist thinking more generally (Studies 1, 2a, 2b, and 3) as well as for specific beliefs in a biological essence (Studies 1, 2a, and 3). (New to this research, we also found similar positive correlations with beliefs in a value-based essence.) The internal meta-analysis for stereotyping confirmed a small but consistent positive relationship. Findings for prejudice were more mixed across studies, consistent with the more mixed findings in the prior literature even for correlational effects, but the internal meta-analysis indicated a small relationship between greater biological essentialism and less negative feelings toward the group (as in, e.g., Haslam & Levy, 2006; but see Chen & Ratliff, 2018).
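
For readers unfamiliar with internal meta-analysis, the Python sketch below shows the standard way study-level correlations are pooled: Fisher's z transform with inverse-variance weights. The correlations and sample sizes are invented for illustration; this is not the authors' analysis or data.

import numpy as np
from scipy import stats

rs = np.array([0.12, 0.18, 0.09, 0.15])  # hypothetical per-study correlations
ns = np.array([250, 300, 280, 320])      # hypothetical per-study sample sizes

z = np.arctanh(rs)         # Fisher z transform of each correlation
weights = ns - 3           # inverse-variance weights, since var(z) = 1 / (n - 3)
z_pooled = np.average(z, weights=weights)
se_pooled = 1 / np.sqrt(weights.sum())

r_pooled = np.tanh(z_pooled)
ci = np.tanh(z_pooled + np.array([-1.96, 1.96]) * se_pooled)
p = 2 * (1 - stats.norm.cdf(abs(z_pooled / se_pooled)))

print(f"pooled r = {r_pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], p = {p:.4f}")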

Before conducting this research and based on the previous literature, we assumed that these correlational relationships would be due to essentialism causing intergroup bias. But although our experimental manipulations worked as designed to shift essentialist beliefs, there was no evidence that biological essentialism caused stereotyping, prejudice, or group boundary intensification.  The present studies thus suggest that a straightforward causal effect of essentialism on intergroup bias may be weaker or more complex than often described.

Thursday, November 18, 2021

Ethics Pays: Summary for Businesses

Ethicalsystems.org
September 2021

Is good ethics good for business? Crime and sleazy behavior sometimes pay off handsomely. People would not do such things if they didn’t think they were more profitable than the alternatives.

But let us make two distinctions right up front. First, let us contrast individual employees with companies. Of course, it can benefit individual employees to lie, cheat, and steal when they can get away with it. But these benefits usually come at the expense of the firm and its shareholders, so leaders and managers should work very hard to design ethical systems that will discourage such self-serving behavior (known as the “principal-agent problem”).

The harder question is whether ethical violations committed by the firm or for the firm’s benefit are profitable. Cheating customers, avoiding taxes, circumventing costly regulations, and undermining competitors can all directly increase shareholder value.

And here we must make the second distinction: short-term vs. long-term. Of course, bad ethics can be extremely profitable in the short run. Business is a complex web of relationships, and it is easy to increase revenues or decrease costs by exploiting some of those relationships. But what happens in the long run?

Customers are happy and confident in knowing they’re dealing with an honest company. Ethical companies retain the bulk of their employees for the long-term, which reduces costs associated with turnover. Investors have peace of mind when they invest in companies that display good ethics because they feel assured that their funds are protected. Good ethics keep share prices high and protect businesses from takeovers.

Culture has a tremendous influence on ethics and its application in a business setting. A corporation’s ability to deliver ethical value is dependent on the state of its culture. The culture of a company influences the moral judgment of employees and stakeholders. Companies that work to create a strong ethical culture motivate everyone to speak and act with honesty and integrity. Companies that portray strong ethics attract customers to their products and services, and are far more likely to manage their negative environmental and social externalities well.

Wednesday, November 17, 2021

False Polarization: Cognitive Mechanisms and Potential Solutions

Fernbach PM, Van Boven L
Current Opinion in Psychology
https://doi.org/10.1016/j.copsyc.2021.06.005

Abstract

While political polarization in the United States is real, intense and increasing, partisans consistently overestimate its magnitude. This “false polarization” is insidious because it reinforces actual polarization and inhibits compromise. We review empirical research on false polarization and the related phenomenon of negative meta-perceptions, and we propose three cognitive and affective processes that likely contribute to these phenomena: categorical thinking, oversimplification and emotional amplification. Finally, we review several interventions that have shown promise in mitigating these biases. 

From the Solutions Section

Another idea is to encourage citizens to engage in deeper discourse about the issues than is the norm. One way to do this is through a “consensus conference,” where people on opposing sides of issues are brought together along with topic experts to learn and discuss over the course of hours or days, with the goal of coming to an agreement. The depth of analysis cuts against the tendency to oversimplify, and the face-to-face nature diminishes categorical thinking by highlighting individuality. The challenge of consensus conferences is scalability. They are resource intensive. However, a recent study showed that simply telling people about the outcome of a consensus conference can yield some of the beneficial effects.

The amplifying effects of anger can be targeted by emotional reappraisal through the lens of sadness: people who were induced to states of sadness rather than anger exhibited lower polarization and false polarization in the context of Hurricane Katrina and a mass shooting. In another study, induced sadness increased people’s willingness to negotiate and their openness to opponents’ perspectives. Sadness reappraisals are feasible in many challenging contexts involving threat to health and security, such as the COVID-19 pandemic, that are readily interpreted as saddening or angering.

Tuesday, November 16, 2021

Decision Prioritization and Causal Reasoning in Decision Hierarchies

Zylberberg, A. (2021, September 6). 
https://doi.org/10.31234/osf.io/agt5s

Abstract

From cooking a meal to finding a route to a destination, many real life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying the human ability to reason over decision hierarchies.

Discussion

Adaptive behavior requires making accurate decisions, but also knowing what decisions are worth making. To study how people decide what to decide on, we investigated a novel task in which people had to find a target, hidden at the lowest level of a decision tree, by gathering stochastic information from the internal nodes of the decision tree. Our central finding is that a small number of heuristic rules explain participants’ behavior in this complex decision-making task. The study extends the perceptual decision framework to more complex decisions that comprise a hierarchy of sub-decisions of varying levels of difficulty, and where the decision maker has to actively decide which decision to address at any given time.

Our task can be conceived as a sequence of binary decisions, or as one decision with eight alternatives.  Participants’ behavior supports the former interpretation.  Participants often performed multiple queries on the same node before descending levels, and they rarely made a transition from an internal node to a higher-level one before reaching a leaf node.  This indicates that participants made categorical decisions about the direction of motion at the visited nodes before they decided to descend levels. This bias toward resolving uncertainty locally was not observed in an approximately optimal policy (Fig. 8), and thus may reflect more general cognitive constraints that limit participants’ performance in our task (Markant et al., 2016). A strong candidate is the limited capacity of working memory (Miller, 1956). By reaching a categorical decision at each internal node, participants avoid the need to operate with full probability distributions over all task-relevant variables, favoring instead a strategy in which only the confidence about the motion choices is carried forward to inform future choices (Zylberberg et al., 2011).
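
A toy Python version of the node-by-node heuristic described above is sketched below: the agent queries a node until its confidence in one direction clears a threshold, commits to that direction, and carries only the confidence forward rather than a full probability distribution. Tree depth, noise level, and the stopping rule are illustrative assumptions, not the task's actual parameters.

import random

def query(true_direction, noise=0.3):
    # Noisy evidence sample: returns the true direction with probability 1 - noise.
    return true_direction if random.random() > noise else 1 - true_direction

def descend(true_path, threshold=0.9, noise=0.3):
    # Descend a binary tree toward a hidden leaf, one categorical decision per level.
    chosen_path, confidences = [], []
    for true_direction in true_path:
        counts = [0, 0]
        while True:
            counts[query(true_direction, noise)] += 1
            total = counts[0] + counts[1]
            confidence = max(counts) / total
            if total >= 3 and confidence >= threshold:
                break
        chosen_path.append(counts.index(max(counts)))
        confidences.append(confidence)  # only the confidence is carried forward
    return chosen_path, confidences

random.seed(1)
path, conf = descend(true_path=[0, 1, 1])  # a 3-level tree with 8 leaves
print("chosen path:", path, "confidence per level:", conf)

If the plan fails at the leaf, the per-level confidences are the only record the agent keeps, so the least confident decision becomes the natural candidate for the cause of the error, in the spirit of the confidence-based error attribution described in the abstract.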

Monday, November 15, 2021

On Defining Moral Enhancement: A Clarificatory Taxonomy

Carl Jago
Journal of Experimental Social Psychology
Volume 95, July 2021, 104145

Abstract

In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions is suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

From the General Discussion

In multiple experiments aimed at examining the influence of base rates per se, we found that base rates do indeed influence judgments, but the size of the effect we observed was very small. We considered that, in discovering moral judgments’ resilience to influence from base rates, we may have only rediscovered a general tendency, such as the fundamental attribution error, whereby people discount situational factors. If so, this tendency would then also apply broadly to non-moral scenarios. We therefore conducted another study in which our experimental materials were modified so as to remove the moral components. We found a substantial base-rate effect on participants’ judgments of performance regarding non-moral behavior. This finding suggests that the resilience to base rates observed in the preceding studies is unlikely the result of a more general tendency, and may instead be unique to moral judgment.

The main reasons why we concluded that the results from the most closely related extant research could not answer the present research question were the involvement in those studies of injunctive norms and special reference groups. To confirm that these factors could inflate any influence of base rates on moral judgment, in the final pair of studies, we modified our experiments so as to include them. Specifically, in one study, we crossed prescriptive and proscriptive injunctive norms with high and low base rates and found that the impact of an injunctive norm outweighs any impact of the base rate. In the other study, we found that simply mentioning, for example, that there were some good people among those who engaged in a high base-rate behavior resulted in a large effect on moral judgment; not only on judgments of the target’s character, but also on judgments of blame and wrongness.

Sunday, November 14, 2021

A brain implant that zaps away negative thoughts

Nicole Karlis
Salon.com
Originally published 14 OCT 21

Here is an excerpt:

Still, the prospect of clinicians manipulating and redirecting one's thoughts, using electricity, raises potential ethical conundrums for researchers — and philosophical conundrums for patients. 

“A person implanted with a closed-loop system to target their depressive episodes could find themselves unable to experience some depressive phenomenology when it is perfectly normal to experience this outcome, such as a funeral,” said Frederic Gilbert, Ph.D., Senior Lecturer in Ethics at the University of Tasmania, in an email to Salon. “A system program to administer a therapeutic response when detecting a specific biomarker will not capture faithfully the appropriateness of some context; automated invasive systems implanted in the brain might constantly step up in your decision-making . . . as a result, it might compromise you as a freely thinking agent.”

Gilbert added there is the potential for misuse — and that raises novel moral questions. 

"There are potential degrees of misuse of some of the neuro-data pumping out of the brain (some believe these neuro-data may be our hidden and secretive thoughts)," Gilbert said. "The possibility of biomarking neuronal activities with AI introduces the plausibility to identify a large range of future applications (e.g. predicting aggressive outburst, addictive impulse, etc). It raises questions about the moral, legal and medical obligations to prevent foreseeable and harmful behaviour."

For these reasons, Gilbert added, it's important "at all costs" to "keep human control in the loop," in both activation and control of one's own neuro-data. 

Saturday, November 13, 2021

Moral behavior in games: A review and call for additional research

E. Clarkson
New Ideas in Psychology
Volume 64, January 2022, 100912

Abstract

The current review has been completed with several specific aims. First, it seeks to acknowledge, and detail, a new and growing body of research that associates moral judgments with behavior in social dilemmas and economic games. Second, it seeks to address how a study of moral behavior is advantaged over past research that exclusively measured morality by asking about moral judgment or belief. In an analysis of these advantages, it is argued that additional research that associates moral judgments with behavior is better equipped to answer debates within the field, such as whether sacrificial judgments do reflect a concern for the greater good and whether utilitarianism (or other moral theories) is better suited to solve certain collective action problems (like tragedies of the commons). To this end, future researchers should use methods that require participants to make decisions with real-world behavioral consequences.

Highlights

• Prior work has long investigated moral judgments in hypothetical scenarios.

• Arguments that debate the validity of this method are reviewed.

• New research is investigating the association between moral judgments and behavior.

• Future study should continue and broaden these investigations to new moral theories.


Friday, November 12, 2021

Supernatural punishment beliefs as cognitively compelling tools of social control

Fitouchi, L., & Singh, M. 
(2021, July 5).

Abstract

Why do humans develop beliefs in supernatural entities that punish uncooperative behaviors? Leading hypotheses maintain that these beliefs are widespread because they facilitate cooperation, allowing their groups to outcompete others in inter-group competition. Focusing on within-group interactions, we present a model in which people strategically endorse supernatural punishment beliefs to manipulate others into cooperating. Others accept these beliefs, meanwhile, because they are made compelling by various cognitive biases: They appear to provide information about why misfortune occurs; they appeal to intuitions about immanent justice; they contain threatening information; and they allow believers to signal their trustworthiness. Explaining supernatural beliefs requires considering both motivations to invest in their endorsement and the reasons others adopt them.

Conclusions

Unlike previous accounts, our model is agnostic to whether supernatural punishment beliefs cause people to behave cooperatively. Many cultural traits, from shamanism to rain magic to divination, remain stable as long as people see them—potentially wrongly—as useful for achieving their goals. Prosocial supernatural beliefs, we argue, are no different. People endorse them to motivate others to be cooperative. Their interaction partners accept these beliefs, meanwhile, because they are cognitively compelling and socially useful. Supernatural punishment beliefs, like so many cultural products, are shaped by people’s psychological biases and strategic goals.

Thursday, November 11, 2021

Revisiting the Social Origins of Human Morality: A Constructivist Perspective on the Nature of Moral Sense-Making

Segovia-Cuéllar, A. 
Topoi (2021). 

Abstract

A recent turn in the cognitive sciences has deepened the attention on embodied and situated dynamics for explaining different cognitive processes such as perception, emotion, and social cognition. This has fostered an extensive interest in the social and ‘intersubjective’ nature of moral behavior, especially from the perspective of enactivism. In this paper, I argue that embodied and situated perspectives, enactivism in particular, nonetheless require further improvements with regards to their analysis of the social nature of human morality. In brief, enactivist proposals still do not define what features of the social-relational context, or which kind of processes within social interactions, make an evaluation or action morally relevant or distinctive from other types of social normativity. As an alternative to this proclivity, and seeking to complement the enactive perspective, I present a definition of the process of moral sense-making and offer an empirically-based ethical distinction between different domains of social knowledge in moral development. For doing so, I take insights from the constructivist tradition in moral psychology. My objective is not to radically oppose embodied and enactive alternatives but to expand the horizon of their conceptual and empirical contributions to morality research.

From the Conclusions

To sum up, for humans to think morally in social environments it is necessary to develop a capacity to recognize morally relevant scenarios, to identify moral transgressions, to feel concerned about morally divergent issues, and to make judgments and decisions with morally relevant consequences. Our moral life involves the flexible application of moral principles since concerns about welfare, justice, and rights are sensitive and contingent on social and contextual factors. Moral motivation and reasoning are situated and embedded phenomena, and the result of a very complex developmental process.

In this paper, I have argued that embodied perspectives, enactivism included, face important challenges that result from their analysis of the social origins of human morality. My main objective has been to expand the horizon of conceptual, empirical, and descriptive implications that they need to address in the construction of a coherent ethical perspective. I have done so by exposing a constructivist approach to the social origins of human morality, taking insights from the cognitive-evolutionary tradition in moral psychology. This alternative radically eschews dichotomies to explain human moral behavior. Moreover, based on the constructivist definition of the moral domain of social knowledge, I have offered a basic notion of moral sense-making and I have called attention to the relevance of distinguishing what makes the development of moral norms different from the development of other domains of social normativity.

Wednesday, November 10, 2021

Confederate monuments and the history of lynching in the American South: An empirical examination

Kyshia Henderson, et al.
Proceedings of the National Academy of Sciences 
Oct 2021, 118 (42) e2103519118
DOI: 10.1073/pnas.2103519118

Significance

The fight over Confederate monuments has fueled lawsuits, protests, counterprotests, arrests, even terrorism, as we painfully saw in August 2017 in Charlottesville, VA. The fight rests on a debate over whether these monuments represent racism (“hate”) or something ostensibly devoid of racism (“heritage,” “Southern pride”). Herein, we show that Confederate monuments are tied to a history of racial violence. Specifically, we find that the number of lynching victims in a county is a positive and significant predictor of Confederate memorializations in that county, even after controlling for relevant covariates. This finding provides concrete, quantitative, historically and geographically situated evidence consistent with the position that Confederate memorializations reflect a racist history, marred by intentions to terrorize and intimidate Black Americans.

Abstract

The present work interrogates the history of Confederate memorializations by examining the relationship between these memorializations and lynching, an explicitly racist act of violence. We obtained and merged data on Confederate memorializations at the county level and lynching victims, also at the county level. We find that the number of lynching victims in a county is a positive and significant predictor of the number of Confederate memorializations in that county, even after controlling for relevant covariates. This finding provides concrete, quantitative, and historically and geographically situated evidence consistent with the position that Confederate memorializations reflect a racist history, one marred by intentions to terrorize and intimidate Black Americans in response to Black progress.
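
The analysis described here is a county-level count regression. As a rough illustration (on synthetic data, with placeholder covariates, and not the authors' dataset or exact specification), such a model might be fit in Python as follows.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_counties = 500
df = pd.DataFrame({
    "lynching_victims": rng.poisson(2, n_counties),
    "population_1920": rng.lognormal(10, 1, n_counties),  # placeholder covariate
    "pct_black_1920": rng.uniform(0, 60, n_counties),     # placeholder covariate
})
# Synthetic outcome with a built-in positive association, for illustration only.
rate = np.exp(-1.0 + 0.15 * df["lynching_victims"] + 0.01 * df["pct_black_1920"])
df["memorializations"] = rng.poisson(rate)

model = smf.glm(
    "memorializations ~ lynching_victims + np.log(population_1920) + pct_black_1920",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())  # the lynching_victims coefficient is the quantity of interest

The substantive claim in the paper rests on the sign and significance of the lynching-victims coefficient after covariates are controlled; the synthetic data above merely show the shape of such a specification.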

From the Discussion

Activists have long argued that Confederate memorializations are hateful, that they represent violence and intimidation, and that they are racist. In 2015, after scaling a flagpole at the South Carolina State House to remove the Confederate flag, activist Bree Newsome wrote in a statement, “It’s the banner of racial intimidation and fear ... a reminder how, for centuries, the oppressive status quo has been undergirded by white supremacist violence with the tacit approval of too many political leaders”. Similarly, activist De’Ivyion Drew, in response to the University of North Carolina at Chapel Hill’s making a deal with the Sons of Confederate Veterans to keep a monument on campus, stated, “Not only is UNC actively emboldening white supremacy through giving monetary support to them, but they’re also giving them the power with the statue to harm communities of color in the state”. Both Newsome and Drew call attention to the symbols’ racist and harmful associations, and the current data are consistent with these claims. In the present work, we find that county-level frequency of lynching predicts county-level frequency of Confederate memorializations. Statistically linking lynching, a recognized form of racial oppression intended to maintain White supremacy and suppress civil rights for Black Americans, with Confederate symbols provides compelling evidence that these symbols are associated with hate, and intentionally so.

Tuesday, November 9, 2021

Louisiana woman learns WWII vet husband’s cadaver dissected at pay-per-view event

Peter Aitken
YahooNews.com
Originally published 7 NOV 21

The family of a deceased Louisiana man found out that his body ended up in a ticketed live human dissection as part of a traveling expo.

David Saunders, a World War II and Korean War veteran who lived in Baker, died at the age of 98 from COVID-19 complications in August. His family donated his remains to science – or so they thought. Instead, his wife, Elsie Saunders, discovered that his body had ended up in an "Oddities and Curiosities Expo" in Oregon.

The expo, organized by DeathScience.org, was set up at the Portland Marriott Downtown Waterfront. People could watch a live human dissection on Oct. 17 for the cost of up to $500 a seat, KING-TV reported.

"From the external body exam to the removal of vital organs including the brain, we will find new perspectives on how the human body can tell a story," an online event description says. "There will be several opportunities for attendees to get an up-close and personal look at the cadaver."

The Seattle-based station sent an undercover reporter to the expo, who noticed David Saunders’ name on a bracelet worn by the body. The reporter was able to contact Elsie Saunders and let her know what had happened.

She was, understandably, horrified.

"It’s horrible what has happened to my husband," Elsie Saunders told NBC News. "I didn’t know he was going to be … put on display like a performing bear or something. I only consented to body donation or scientific purposes."

"That’s the way my husband wanted it," she explained. "To say the least, I’m upset."

Monday, November 8, 2021

What the mind is

B. F. Malle
Nature - Human Behaviour
Originally published 26 Aug 21

Humans have a conception of what the mind is. This conception takes mind to be a set of capacities, such as the ability to be proud or feel sad, to remember or to plan. Such a multifaceted conception allows people to ascribe mind in varying degrees to humans, animals, robots or any other entity [1,2]. However, systematic research on this conception of mind has so far been limited to Western populations. A study by Weisman and colleagues [3] published in Nature Human Behaviour now provides compelling evidence for some cross-cultural universals in the human understanding of what the mind is, as well as revealing intercultural variation.

(cut)

As with all new findings, readers must be alert and cautious in the conclusions they draw. We may not conclude with certainty that these are the three definitive dimensions of human mind perception, because the 23 mental capacities featured in the study were not exhaustive; in particular, they did not encompass two important domains — morality and social cognition. Moral capacities are central to social relations, person perception and identity; likewise, people care deeply about the capacity to empathize and understand others’ thoughts and feelings. Yet the present study lacked items to capture these domains. When items for moral and social–cognitive capacities have been included in past US studies, they formed a strong separate dimension, while emotions shifted toward the Experience dimension. 

Incorporating moral–social capacities in future studies may strengthen the authors’ findings. Morality and social cognition are credible candidates for cultural universals, so their inclusion could make cross-cultural stability of mind perception even more decisive. Moreover, inclusion of these important mental capacities might clarify one noteworthy cultural divergence in the data: the fact that adults in Ghana and Vanuatu combined the emotional and perceptual-cognitive dimensions. Without the contrast to social–moral capacities, emotion and cognition might have been similar enough to move toward each other. Including social–moral capacities in future studies could provide a contrasting and dividing line, which would pull emotion and cognition apart. The results might, potentially, be even more consistent across cultures.

Sunday, November 7, 2021

Moral Judgment as Categorization

McHugh, C., McGann, M., Igou, E. R., & 
Kinsella, E. L. (2021). 
Perspectives on Psychological Science 
https://doi.org/10.1177/1745691621990636

Abstract

Observed variability and complexity of judgments of "right" and "wrong" cannot be readily accounted for within extant approaches to understanding moral judgment. In response to this challenge, we present a novel perspective on categorization in moral judgment. Moral judgment as categorization (MJAC) incorporates principles of category formation research while addressing key challenges of existing approaches to moral judgment. People develop skills in making context-relevant categorizations. They learn that various objects (events, behaviors, people, etc.) can be categorized as morally right or wrong. Repetition and rehearsal result in reliable, habitualized categorizations. According to this skill-formation account of moral categorization, the learning and the habitualization of the forming of moral categories occur within goal-directed activity that is sensitive to various contextual influences. By allowing for the complexity of moral judgments, MJAC offers greater explanatory power than existing approaches while also providing opportunities for a diverse range of new research questions.

Summarizing the Differences Between MJAC and Existing Approaches

Above, we have outlined how MJAC differs from existing theories in terms of assumptions and explanation. These theories make assumptions based on content, and this results in essentialist theorizing, either implicit or explicit attempts to define an “essence” of morality. In contrast, MJAC rejects essentialism, instead assuming moral categorizations are dynamical, context-dependent, and occurring as part of goal-directed activity. Each of the theories discussed is explicitly or implicitly (e.g., Schein & Gray, 2018, p. 41) based on dual-process assumptions, with related dichotomous assumptions regarding the cognitive mechanisms (where these mechanisms are specified). MJAC does not assume distinct, separable processes, adopting type-token interpretation, occurring as part of goal-directed activity (Barsalou, 2003, 2017), as the mechanism that underlies moral categorization. These differences in assumptions underlie the differences in the explanation discussed above.

Saturday, November 6, 2021

Generating Options and Choosing Between Them Depend on Distinct Forms of Value Representation

Morris, A., Phillips, J., Huang, K., & 
Cushman, F. (2021). 
Psychological Science. 
https://doi.org/10.1177/09567976211005702

Abstract

Humans have a remarkable capacity for flexible decision-making, deliberating among actions by modeling their likely outcomes. This capacity allows us to adapt to the specific features of diverse circumstances. In real-world decision-making, however, people face an important challenge: There are often an enormous number of possibilities to choose among, far too many for exhaustive consideration. There is a crucial, understudied prechoice step in which, among myriad possibilities, a few good candidates come quickly to mind. How do people accomplish this? We show across nine experiments (N = 3,972 U.S. residents) that people use computationally frugal cached value estimates to propose a few candidate actions on the basis of their success in past contexts (even when irrelevant for the current context). Deliberative planning is then deployed just within this set, allowing people to compute more accurate values on the basis of context-specific criteria. This hybrid architecture illuminates how typically valuable thoughts come quickly to mind during decision-making.
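As a rough illustration of the two-stage architecture the abstract describes, the sketch below (in Python) generates a small consideration set from cached, context-general values and then deliberates only within that set using context-specific values. The option names and numbers are hypothetical, not taken from the studies.

from typing import Callable

def generate_candidates(cached_value: dict, k: int = 3) -> list:
    """Stage 1: a few options with the highest cached (context-general) value come to mind."""
    return sorted(cached_value, key=cached_value.get, reverse=True)[:k]

def choose(candidates: list, context_value: Callable) -> str:
    """Stage 2: deliberate only within the small consideration set, using context-specific value."""
    return max(candidates, key=context_value)

# Hypothetical values: cached estimates reflect success in past contexts and may
# be irrelevant now; deliberation corrects for this, but only among candidates.
cached = {"pizza": 0.9, "sushi": 0.8, "salad": 0.7, "soup": 0.2, "tacos": 0.6}
context_specific = {"pizza": 0.1, "sushi": 0.6, "salad": 0.8, "soup": 0.9, "tacos": 0.3}

shortlist = generate_candidates(cached, k=3)        # ['pizza', 'sushi', 'salad']
decision = choose(shortlist, context_specific.get)  # 'salad'
print(shortlist, decision)

Note that in this toy example "soup" has the highest context-specific value but never comes to mind because its cached value is low, which is the kind of behavior a hybrid account of this sort would predict.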

From the General Discussion

Salience effects, such as recency, frequency of consideration, and extremity, likely also contribute to consideration (Kahneman, 2003; Tversky & Kahneman, 1973). Our results supported at least one salience effect: In Studies 4 through 6, in addition to our primary effect of high cached value, options with more extreme cached values relative to the mean also tended to come to mind (see the checkmark shape in Fig. 3d). Salience effects such as this may have a functional basis, such as conserving scarce cognitive resources (Lieder et al., 2018). An ideal general theory would specify how these diverse factors—including many others, such as personality traits, social roles, and cultural norms (Smaldino & Richerson, 2012)—form a coherent, adaptive design for option generation.

A growing body of work suggests that value influences what comes to mind not only during decision-making but also in many other contexts, such as causal reasoning, moral judgment, and memory recall (Bear & Knobe, 2017; Braun et al., 2018; Hitchcock & Knobe, 2009; Mattar & Daw, 2018; Phillips et al., 2019). A key inquiry going forward will be the role of cached versus context-specific value estimation in these cases.

Friday, November 5, 2021

Invisible gorillas in the mind: Internal inattentional blindness and the prospect of introspection training

Morris, A. (2021, September 26).

Abstract

Much of high-level cognition appears inaccessible to consciousness. Countless studies have revealed mental processes -- like those underlying our choices, beliefs, judgments, intuitions, etc. -- which people do not notice or report, and these findings have had a widespread influence on the theory and application of psychological science. However, the interpretation of these findings is uncertain. Making an analogy to perceptual consciousness research, I argue that much of the unconsciousness of high-level cognition is plausibly due to internal inattentional blindness: missing an otherwise consciously-accessible internal event because your attention was elsewhere. In other words, rather than being structurally unconscious, many higher mental processes might instead be "preconscious", and would become conscious if a person attended to them. I synthesize existing indirect evidence for this claim, argue that it is a foundational and largely untested assumption in many applied interventions (such as therapy and mindfulness practices), and suggest that, with careful experimentation, it could form the basis for a long-sought-after science of introspection training.

Conclusion

Just as people can miss perceptual events due to external inattention, so may they be blind to internal events – like those constituting high-level mental processes – due to internal inattention. The existence of internal inattentional blindness, and the possibility of overcoming it through training, are widely assumed in successful applied psychological practices and widely reported by practitioners; yet these possibilities have rarely been explored experimentally, or taken seriously by basic theorists. Rigorously demonstrating the existence of IIB could open a new chapter both in the development of psychological interventions, and in our understanding of the scope of conscious awareness.


Attention Therapists: Some very relevant information here.

Thursday, November 4, 2021

The AMA needs to declare a national mental health emergency

Susan Hata and Thalia Krakower
STAT News
Originally published 6 OCT 21

As the pandemic continues to disrupt life across the U.S., a staggering number of Americans are reaching out to their primary care doctors for help with sometimes overwhelming mental health struggles. Yet primary care doctors like us have nowhere to turn when it comes to finding mental health providers for them, and our patients often suffer without the specialty care they need.

It’s time for the American Medical Association to take decisive action and declare a national mental health emergency.

More than 40% of Americans report symptoms of anxiety or depression, and emergency rooms are flooded with patients in psychiatric crises. Untreated, these issues can have devastating consequences. In 2020, an estimated 44,800 Americans lost their lives to suicide; among children ages 10 to 14, suicide is the second leading cause of death.

Finding mental health providers for patients is an uphill climb, in part because there is no centralized process for it. Timely mental health services are astonishingly difficult to obtain even in Massachusetts, where we live and work, which has the most psychologists per capita. Waitlists for therapists can be longer than six months for adults, and even longer for children.

(cut)

By declaring a mental health emergency, the AMA could galvanize health administrators and drive the innovation needed to improve the existing mental health system. When Covid-19 was named a pandemic, the U.S. health care infrastructure adapted quickly to manage the deluge of infections. Leaders nimbly and creatively mobilized resources. They redeployed staff, built field hospitals and overflow ICUs, and deferred surgeries and routine care to preserve resources and minimize hospital-based transmission of Covid-19. With proper framing and a sense of urgency, similar things can happen for the mental health care system.

To be clear, all of this is the AMA’s lane: In addition to the devastating toll of suicides and overdoses, untreated mental illness worsens cardiac outcomes, increases mortality from Covid-19, and shortens life spans. Adult mental illness also directly affects the health of children, leading to poor health outcomes across generations.

Wednesday, November 3, 2021

Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability

Lantian, A., Bagneux, V., Delouvée, S., 
& Gauvrit, N. (2020, February 7). 
Applied Cognitive Psychology
https://doi.org/10.31234/osf.io/8qhx4

Abstract

Critical thinking is of paramount importance in our society. People regularly assume that critical thinking is a way to reduce conspiracy belief, although the relationship between critical thinking and conspiracy belief has never been tested. We conducted two studies (Study 1, N = 86; Study 2, N = 252), in which we found that critical thinking ability—measured by an open-ended test emphasizing several areas of critical thinking ability in the context of argumentation—is negatively associated with belief in conspiracy theories. Additionally, we did not find a significant relationship between self-reported (subjective) critical thinking ability and conspiracy belief. Our results support the idea that conspiracy believers have less developed critical thinking ability and stimulate discussion about the possibility of reducing conspiracy beliefs via the development of critical thinking.
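In analytic terms, the abstract describes two simple bivariate associations per study: one between an objectively scored critical thinking test and conspiracy belief, and one between self-reported critical thinking and conspiracy belief. The sketch below (Python, with a hypothetical data file and placeholder variable names) shows what that comparison looks like; the study's actual measures, scoring, and statistics are detailed in the paper.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical dataset: one row per participant. Column names are placeholders,
# not the study's actual variable names.
df = pd.read_csv("study_data.csv")

# Objective critical thinking (scored open-ended test) vs. conspiracy belief:
# the abstract reports a negative association.
r_obj, p_obj = pearsonr(df["ct_test_score"], df["conspiracy_belief"])

# Self-reported critical thinking vs. conspiracy belief: reported as
# non-significant in the abstract.
r_self, p_self = pearsonr(df["ct_self_report"], df["conspiracy_belief"])

print(f"objective CT vs. belief:      r = {r_obj:.2f}, p = {p_obj:.3f}")
print(f"self-reported CT vs. belief:  r = {r_self:.2f}, p = {p_self:.3f}")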

From the General Discussion

The presumed role of critical thinking in belief in conspiracy theories is continuously discussed by researchers, journalists, and lay people on social networks. One example is the capacity to exercise critical thinking ability to distinguish bogus conspiracy theories from genuine conspiracy theories (Bale, 2007), leading us to question when critical thinking ability could be used to support this adaptive function. Sometimes, it is not unreasonable to think that a form of rationality would help to facilitate the detection of dangerous coalitions (van Prooijen & Van Vugt, 2018). In that respect, Stojanov and Halberstadt (2019) recently introduced a distinction between irrational versus rational suspicion. Although the former focuses on the general tendency to believe in any conspiracy theory, the latter focuses on a heightened sensitivity to deception or corruption, which is defined as “healthy skepticism.” These two aspects of suspicion can now be handled simultaneously thanks to a new scale developed by Stojanov and Halberstadt (2019). In our study, we found that critical thinking ability was associated with lower unfounded belief in conspiracy theories, but this does not answer the question as to whether critical thinking ability can be helpful for the detection of true conspiracies. Future studies could use this new measurement to address this specific question.