Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Cognition.

Wednesday, June 30, 2021

Extortion, intuition, and the dark side of reciprocity

Bernhard, R., & Cushman, F. A. 
(2021, April 22). 
https://doi.org/10.31234/osf.io/kycwa

Abstract

Extortion occurs when one person uses some combination of threats and promises to extract an unfair share of benefits from another. Although extortion is a pervasive feature of human interaction, it has received relatively little attention in psychological research. To this end, we begin by observing that extortion is structured quite similarly to far better-studied “reciprocal” social behaviors, such as conditional cooperation and retributive punishment. All of these strategies are designed to elicit some desirable behavior from a social partner, and do so by constructing conditional incentives; the main difference is that the desired behavioral response is an unfair or unjust allocation of resources during extortion, whereas it is often a fair or just distribution of resources for reciprocal cooperation and punishment. Thus, we conjecture, a common set of psychological mechanisms may render these strategies successful. We know from prior work that prosocial forms of reciprocity often work best when implemented inflexibly and intuitively, rather than deliberatively. This both affords long-term commitment to the reciprocal strategy, and also signals this commitment to social partners. We argue that, for the same reasons, extortion is likely to depend largely upon inflexible, intuitive psychological processes. Several existing lines of circumstantial evidence support this conjecture.

From the Conclusion

An essential part of our analysis is to characterize strategies, rather than individual behaviors, as “prosocial” or “antisocial”. Extortionate strategies can be implemented by behaviors that “help” (as in the case of a manager who gives promotions to those who work uncompensated hours), while prosocial strategies can be implemented by behaviors that harm (as in the case of the CEO who finds out and reprimands this manager). This manner of thinking at the level of strategies, rather than behavior, invites a broader realignment of our perspective on the relationship between intuition and social behavior. If our focus were on individual behaviors, we might have posed the question, “Does intuition support cooperation or defection?”. Framed this way, the recent literature could be taken to suggest the answer is “cooperation”—and, therefore, that intuition promotes prosociality. Surely this is often true, but we suggest that intuitive cooperation can also serve antisocial ends. Meanwhile, as we have emphasized, a prosocial strategy such as TFT may benefit from intuitive (reciprocal) defection. Quickly, the question, “Does intuition support cooperation or defection?”—and any implied relationship to the question “Does intuition support prosocial or antisocial behavior?”—begins to look ill-posed.

Tuesday, June 29, 2021

What Matters for Moral Status: Behavioural or Cognitive Equivalence?

John Danaher
Cambridge Quarterly of Healthcare Ethics
2021 Jul;30(3):472-478.

Abstract

Henry Shevlin’s paper—“How could we know when a robot was a moral patient?”—argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately—and I guess this is hardly surprising—I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I will explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.

(cut)

The second problem is more fundamental and may get to the heart of the disagreement between myself and Shevlin. The problem is that Shevlin seems to think that behavioural evidence and cognitive evidence are separable. I do not think that they are. After all, cognitive architectures do not speak for themselves. They speak through behaviour. The human cognitive architecture, for example, is not that differentiated at a biological level, particularly at the cortical level. You would be hard pressed to work out the cognitive function of different brain regions just by staring at MRI scans and microscopic slices of neural tissue. You need behavioural evidence to tell you what the cognitive architecture does. This is what has happened repeatedly in the history of neuro- and cognitive science. So, for example, we find that people with damage to particular regions of the brain exhibit some odd behaviours (lack of long-term memory formation; irritability and impulsiveness; language deficits; and so on). We then use this behavioural evidence to build up a functional map of the cognitive architecture. If the map is detailed enough, someone might be able to infer certain psychological or mental states from patterns of activity in the cognitive architecture, but this is only because we first used behaviour to build up the functional map.

Wednesday, June 2, 2021

The clockwork universe: is free will an illusion?

Oliver Burkeman
The Guardian
Originally posted 27 Apr 21

Here is an excerpt:

And Saul Smilansky, a professor of philosophy at the University of Haifa in Israel, who believes the popular notion of free will is a mistake, told me that if a graduate student who was prone to depression sought to study the subject with him, he would try to dissuade them. “Look, I’m naturally a buoyant person,” he said. “I have the mentality of a village idiot: it’s easy to make me happy. Nevertheless, the free will problem is really depressing if you take it seriously. It hasn’t made me happy, and in retrospect, if I were at graduate school again, maybe a different topic would have been preferable.”

Smilansky is an advocate of what he calls “illusionism”, the idea that although free will as conventionally defined is unreal, it’s crucial people go on believing otherwise – from which it follows that an article like this one might be actively dangerous. (Twenty years ago, he said, he might have refused to speak to me, but these days free will scepticism was so widely discussed that “the horse has left the barn”.) “On the deepest level, if people really understood what’s going on – and I don’t think I’ve fully internalised the implications myself, even after all these years – it’s just too frightening and difficult,” Smilansky said. “For anyone who’s morally and emotionally deep, it’s really depressing and destructive. It would really threaten our sense of self, our sense of personal value. The truth is just too awful here.”

(cut)

By far the most unsettling implication of the case against free will, for most who encounter it, is what it seems to say about morality: that nobody, ever, truly deserves reward or punishment for what they do, because what they do is the result of blind deterministic forces (plus maybe a little quantum randomness). “For the free will sceptic,” writes Gregg Caruso in his new book Just Deserts, a collection of dialogues with fellow philosopher Daniel Dennett, “it is never fair to treat anyone as morally responsible.” Were we to accept the full implications of that idea, the way we treat each other – and especially the way we treat criminals – might change beyond recognition.

Monday, May 31, 2021

Disgust Can Be Morally Valuable

Charlie Kurth
Scientific American
Originally posted 9 May 21

Here is an excerpt:

Let’s start by considering disgust’s virtues. Not only do we tend to experience disgust toward moral wrongs like hypocrisy and exploitation, but the shunning and social excluding that disgust brings seems a fitting response to those who pollute the moral fabric in these ways. Moreover, in the face of worries about morally problematic disgust—disgust felt at the wrong time or in the wrong way—advocates respond that it’s an emotion we can substantively change for the better.

On this front, disgust’s advocates point to exposure and habituation; just like I might overcome the disgust I feel about exotic foods by trying them, I can overcome the disgust I feel about same-sex marriage by spending more time with gay couples. Moreover, work in psychology appears to support this picture. Medical school students, for instance, lose their disgust about touching dead bodies after a few months of dissecting corpses, and new mothers quickly become less disgusted by the smell of soiled diapers.

But these findings may be deceptive. For starters, when we look more closely at the results of the diaper experiment, we see that a mother’s reduced disgust sensitivity is most pronounced with regard to her own baby’s diapers, and additional research indicates that mothers have a general preference for the smell of their own children. This combination suggests, contra the disgust advocates, that a mother’s disgust is not being eliminated. Rather, her disgust at the soiled diapers is still there; it’s just being masked by the positive feelings that she’s getting from the smell of her newborn. Similarly, when we look carefully at the cadaver study, we see that while the disgust of medical students toward touching the cold bodies of the dissection lab is reduced with exposure, the disgust they feel toward touching the warm bodies of the recently deceased remained unchanged.

Saturday, May 29, 2021

Comparisons Inform Me Who I Am: A General Comparative-Processing Model of Self-Perception

Morina N.
Perspectives on Psychological Science. 
February 2021. 
doi:10.1177/1745691620966788
 
Abstract

People’s self-concept contributes to their sense of identity over time. Yet self-perception is motivated and serves survival and thus does not reflect stable inner states or accurate biographical accounts. Research indicates that different types of comparison standards act as reference frames in evaluating attributes that constitute the self. However, the role of comparisons in self-perception has been underestimated, arguably because of lack of a guiding framework that takes into account relevant aspects of comparison processes and their interdependence. I propose a general comparative model of self-perception that consists of a basic comparison process involving the individual’s prior mental representation of the target dimension, the construal of the comparison standard, and the comparison outcome representing the posterior representation of the target dimension. The generated dimensional construal is then appraised with respect to one’s motives and controllability and goes on to shape emotional, cognitive, and behavioral responses. Contextual and personal factors influence the comparison process. This model may be informative in better understanding comparison processes in people’s everyday lives and their role in shaping self-perception and in designing interventions to assist people in overcoming undesirable consequences of comparative behavior.

Concluding Remarks

Comparisons inform people about their current selves and their progress toward end goals. Comparative evaluations are omnipresent in everyday life, appear both unintentionally and intentionally, and are context sensitive. The current framework defines comparison as a dynamic process consisting of several subcomponents. The segmentation of the subcomponent processes into activation of comparison, basic comparison process, valuation, as well as emotional, cognitive, and behavioral responses is not rigid; however, the taxonomy should prove conceptually useful because it breaks down the comparison process into testable constituent subprocesses. A better understanding of comparative behavior processes will enhance the knowledge of self-perception and help identify effective strategies that promote more adaptive comparisons.

Friday, May 28, 2021

‘Belonging Is Stronger Than Facts’: The Age of Misinformation

Max Fisher
The New York Times
Originally published 7 May 21

Here is an excerpt:

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. 

But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

This need can emerge especially out of a sense of social destabilization. As a result, misinformation is often prevalent among communities that feel destabilized by unwanted change or, in the case of some minorities, powerless in the face of dominant forces.

Framing everything as a grand conflict against scheming enemies can feel enormously reassuring. And that’s why perhaps the greatest culprit of our era of misinformation may be, more than any one particular misinformer, the era-defining rise in social polarization.

“At the mass level, greater partisan divisions in social identity are generating intense hostility toward opposition partisans,” which has “seemingly increased the political system’s vulnerability to partisan misinformation,” Dr. Nyhan wrote in an earlier paper.

Saturday, May 22, 2021

A normative account of self-deception, overconfidence, and paranoia

Rossi-Goldthorpe, R., Leong, Y. C., et al.
(2021, April 12).
https://doi.org/10.31234/osf.io/9fkb5

Abstract

Self-deception, paranoia, and overconfidence involve misbeliefs about self, others, and world. They are often considered mistaken. Here we explore whether they might be adaptive, and further, whether they might be explicable in normative Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem, rather than excessive social concern. The normative model suggests that spurious beliefs can have value – self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and costly wars.

Wednesday, May 19, 2021

Population ethical intuitions

Caviola, L., Althaus, D., Mogensen, A., 
& Goodwin, G. (2021, April 1). 

Abstract

We investigated lay people’s population ethical intuitions (N = 4,374), i.e., their moral evaluations of populations that differ in size and composition. First, we found that people place greater relative weight on, and are more sensitive to, suffering compared to happiness. Participants, on average, believed that more happy people are needed to outweigh a given amount of unhappy people in a population (Studies 1a-c). Second, we found that—in contrast to so-called person-affecting views—people do not consider the creation of new people as morally neutral. Participants considered it good to create a new happy person and bad to create a new unhappy person (Study 2). Third, we found that people take into account both the average level (averagism) and the total level (totalism) of happiness when evaluating populations. Participants preferred populations with greater total happiness levels when the average level remained constant (Study 3) and populations with greater average happiness levels when the total level remained constant (Study 4). When the two principles were in conflict, participants’ preferences lay in between the recommendations of the two principles, suggesting that both are applied simultaneously (Study 5). In certain cases, participants even showed averagist preferences when averagism disfavors adding more happy people and favors adding more unhappy people to a population (Study 6). However, when participants were prompted to reflect as opposed to rely on their intuitions, their preferences became more totalist (Studies 5-6). Our findings have implications for moral psychology, philosophy and policy making.
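The averagist/totalist contrast in Studies 3–6 is easy to make concrete. The sketch below is purely illustrative (the happiness values are invented, not the paper's stimuli): it shows a case where both principles agree and a conflict case where adding mildly happy people raises a population's total happiness while lowering its average.

```python
# Illustrative sketch of averagism vs. totalism; all numbers are
# hypothetical, not data from Caviola et al. (2021).

def total_happiness(pop):
    """Totalist evaluation: sum of individual happiness levels."""
    return sum(pop)

def average_happiness(pop):
    """Averagist evaluation: mean individual happiness level."""
    return sum(pop) / len(pop)

# Study 3 pattern: same average, larger total -> both principles favor b.
a = [5, 5]          # total 10, average 5
b = [5, 5, 5, 5]    # total 20, average 5

# Conflict case (Study 5 pattern): adding two mildly happy people
# raises the total but lowers the average.
c = [9, 9]          # total 18, average 9.0
d = [9, 9, 2, 2]    # total 22, average 5.5

print(total_happiness(d) > total_happiness(c))      # True: totalism favors d
print(average_happiness(d) > average_happiness(c))  # False: averagism favors c
```

Participants whose preferences fall between the two recommendations behave as if they apply both evaluations at once, which is the paper's Study 5 result.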

From the Discussion

Suffering is more bad than happiness is good

We found that people weigh suffering more than happiness when they evaluate the goodness of populations consisting of both happy and unhappy people. Thus, people are neither following strict negative utilitarianism (minimizing suffering, giving no weight to maximizing happiness at all) nor strict classical utilitarianism (minimizing suffering and maximizing happiness, weighing both equally). Instead, the average person’s intuitions seem to track a mixture of these two theories. In Studies 1a-c, participants on average believed that approximately 1.5-3 times more happy people are required to outweigh a given amount of unhappy people. The precise trade ratio between happiness and suffering depended on the intensity levels of happiness and suffering. (In additional preliminary studies, we found that the trade ratio can also heavily depend on the framing of the question.) Study 1c clarified that, on average, participants continued to believe that more happiness was needed to outweigh suffering even when the happiness and suffering units were exactly equally intense. This suggests that people generally weigh suffering more than happiness in their moral assessments above and beyond perceiving suffering to be more intense than happiness. However, our studies also made clear that there are individual differences and that a substantial proportion of participants weighed happiness and suffering equally strongly, in line with classical utilitarianism.
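The reported trade ratio can be read as a weight on suffering in a simple additive evaluation. The sketch below is an assumption-laden illustration (the weight and population sizes are mine, not the paper's data): with a suffering weight of 2, within the reported 1.5–3 range, ten unhappy people need more than twenty equally intense happy people to be outweighed.

```python
# Hypothetical sketch of the trade-ratio idea; the weight w is an
# assumption within the 1.5-3 range reported in Studies 1a-c.

def population_value(happy, unhappy, suffering_weight):
    """Net value of a population with equally intense happy and
    unhappy members, where suffering counts suffering_weight times
    as much as happiness."""
    return happy - suffering_weight * unhappy

print(population_value(20, 10, 2.0))  # 0.0 -> exactly balanced
print(population_value(21, 10, 2.0))  # 1.0 -> net positive
```

A weight of 1 recovers classical utilitarianism's equal treatment of happiness and suffering; letting the weight grow without bound approaches strict negative utilitarianism.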

Thursday, April 29, 2021

Why evolutionary psychology should abandon modularity

Pietraszewski, D., & Wertz, A. E. 
(2021, March 29).
https://doi.org/10.1177/1745691621997113

Abstract

A debate surrounding modularity—the notion that the mind may be exclusively composed of distinct systems or modules—has held philosophers and psychologists captive for nearly forty years. Concern about this thesis—which has come to be known as the massive modularity debate—serves as the primary grounds for skepticism of evolutionary psychology’s claims about the mind. Here we will suggest that the entirety of this debate, and the very notion of massive modularity itself, is ill-posed and confused. In particular, it is based on a confusion about the level of analysis (or reduction) at which one is approaching the mind. Here, we will provide a framework for clarifying at what level of analysis one is approaching the mind, and explain how a systemic failure to distinguish between different levels of analysis has led to profound misunderstandings of not only evolutionary psychology, but also of the entire cognitivist enterprise of approaching the mind at the level of mechanism. We will furthermore suggest that confusions between different levels of analysis are endemic throughout the psychological sciences—extending well beyond issues of modularity and evolutionary psychology. Therefore, researchers in all areas should take preventative measures to avoid this confusion in the future.

Conclusion

What has seemed to be an important but interminable debate about the nature of (massive) modularity is better conceptualized as the modularity mistake. Clarifying the level of analysis at which one is operating will not only resolve the debate, but render it moot. In its stead, researchers will be free to pursue much simpler, clearer, and more profound questions about how the mind works. If we proceed as usual, we will end up back in the same confused place where we started in another 40 years—arguing once again about who’s on first. Confusing or collapsing across different levels of analysis is not just a problem for modularity and evolutionary psychology. Rather, it is the greatest problem facing early-21st-century psychology, dwarfing even the current replication crisis. Since at least the days of the neobehaviorists (e.g. Tolman, 1964), the ontology of the intentional level has become mingled with the functional level in all areas of the cognitive sciences (see Stich, 1986). Constructs such as thinking, reasoning, effort, intuition, deliberation, automaticity, and consciousness have become misunderstood and misused as functional level descriptions of how the mind works. Appeals to a central agency who uses “their” memory, attention, reasoning, and so on have become commonplace and unremarkable. Even the concept of cognition itself has fallen into the same levels of analysis confusion seen in the modularity mistake. In the process, a shared notion of what it means to provide a coherent functional level (or mechanistic) description of the mind has been lost.

We do not bring up these broader issues to resolve them here. Rather, we wish to emphasize what is at stake when it comes to being clear about levels of analysis. If we do not respect the distinctions between levels, no amount of hard work, and no mountain of data we will ever collect, will resolve the problems created by conflating them. The only question is whether or not we are willing to begin the slow, difficult — but ultimately clarifying and redeeming — process of unconfounding the intentional and functional levels of analysis. The modularity mistake is as good a place as any to start.

Friday, April 2, 2021

Neuroscience shows how interconnected we are – even in a time of isolation

Lisa Feldman Barrett
The Guardian
Originally posted 10 Feb 21

Here is an excerpt:

Being the caretakers of each other’s body budgets is challenging when so many of us feel lonely or are physically alone. But social distancing doesn’t have to mean social isolation. Humans have a special power to connect with and regulate each other in another way, even at a distance: with words. If you’ve ever received a text message from a loved one and felt a rush of warmth, or been criticised by your boss and felt like you’d been punched in the gut, you know what I’m talking about. Words are tools for regulating bodies.

In my research lab, we run experiments to demonstrate this power of words. Our participants lie still in a brain scanner and listen to evocative descriptions of different situations. One is about walking into your childhood home and being smothered in hugs and smiles. Another is about awakening to your buzzing alarm clock and finding a sweet note from your significant other. As they listen, we see increased activity in brain regions that control heart rate, breathing, metabolism and the immune system. Yes, the same brain regions that process language also help to run your body budget. Words have power over your biology – your brain wiring guarantees it.

Our participants also had increased activity in brain regions involved in vision and movement, even though they were lying still with their eyes closed. Their brains were changing the firing of their own neurons to simulate sight and motion in their mind’s eye. This same ability can build a sense of connection, from a few seconds of poor-quality mobile phone audio, or from a rectangle of pixels in the shape of a friend’s face. Your brain fills in the gaps – the sense data that you don’t receive through these media – and can ease your body budget deficit in the moment.

In the midst of social distancing, my Zoom friend and I rediscovered the body-budgeting benefits of older means of communication, such as letter writing. The handwriting of someone we care about can have an unexpected emotional impact. A piece of paper becomes a wave of love, a flood of gratitude, a belly-aching laugh.

Monday, March 29, 2021

The problem with prediction

Joseph Fridman
aeon.com
Originally published 25 Jan 21

Here is an excerpt:

Today, many neuroscientists exploring the predictive brain deploy contemporary economics as a similar sort of explanatory heuristic. Scientists have come a long way in understanding how ‘spending metabolic money to build complex brains pays dividends in the search for adaptive success’, remarks the philosopher Andy Clark, in a notable review of the predictive brain. The idea of the predictive brain makes sense because it is profitable, metabolically speaking. Similarly, the psychologist Lisa Feldman Barrett describes the primary role of the predictive brain as managing a ‘body budget’. In this view, she says, ‘your brain is kind of like the financial sector of a company’, predictively allocating resources, spending energy, speculating, and seeking returns on its investments. For Barrett and her colleagues, stress is like a ‘deficit’ or ‘withdrawal’ from the body budget, while depression is bankruptcy. In Blackmore’s day, the brain was made up of sentries and soldiers, whose collective melancholy became the sadness of the human being they inhabited. Today, instead of soldiers, we imagine the brain as composed of predictive statisticians, whose errors become our neuroses. As the neuroscientist Karl Friston said: ‘[I]f the brain is an inference machine, an organ of statistics, then when it goes wrong, it’ll make the same sorts of mistakes a statistician will make.’

The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’

Friday, March 12, 2021

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Schubert, S., Caviola, L. & Faber, N.S. 
Sci Rep 9, 15100 (2019). 
https://doi.org/10.1038/s41598-019-50145-9

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.
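The long-term framing behind finding (c) can be made concrete with toy numbers (all assumptions for illustration, not figures from the paper): judged by immediate deaths, a near-extinction catastrophe and an extinction catastrophe look almost identical, but once the forgone future population is counted, extinction is worse by orders of magnitude.

```python
# Toy illustration of why extinction can be "uniquely bad" only under
# a long-term frame. All numbers are hypothetical assumptions.
population = 8e9
near_extinction_deaths = 0.99 * population   # survivors allow recovery
extinction_deaths = population               # no recovery possible
future_people_if_recovery = 1e12             # assumed long-run future

# Immediate-harm frame: the two catastrophes look nearly the same.
immediate_ratio = extinction_deaths / near_extinction_deaths
print(immediate_ratio)  # ~1.01

# Long-term frame: extinction also forecloses the entire future.
near_total = near_extinction_deaths
extinct_total = extinction_deaths + future_people_if_recovery
print(extinct_total / near_total)  # two orders of magnitude larger
```

This mirrors the paper's explanation: participants who focus on immediate death and suffering are effectively computing the first ratio, while the long-term prompt pushes them toward the second.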

Thus, when asked in the most straightforward and unqualified way, participants do not find human extinction uniquely bad. 

Friday, February 12, 2021

Measuring Implicit Intergroup Biases.

Lai, C. K., & Wilson, M. 
(2020, December 9).

Abstract

Implicit intergroup biases are automatically activated prejudices and stereotypes that may influence judgments of others on the basis of group membership. We review evidence on the measurement of implicit intergroup biases, finding: implicit intergroup biases reflect the personal and the cultural, implicit measures vary in reliability and validity, and implicit measures vary greatly in their prediction of explicit and behavioral outcomes due to theoretical and methodological moderators. We then discuss three challenges to the application of implicit intergroup biases to real‐world problems: (1) a lack of research on social groups of scientific and public interest, (2) developing implicit measures with diagnostic capabilities, and (3) resolving ongoing ambiguities in the relationship between implicit bias and behavior. Making progress on these issues will clarify the role of implicit intergroup biases in perpetuating inequality.

(cut)

Predictive Validity

Implicit intergroup biases are predictive of explicit biases, behavioral outcomes, and regional differences in inequality.

Relationship to explicit prejudice & stereotypes. 

The relationship between implicit and explicit measures of intergroup bias is consistently positive, but its size depends on the topic. In a large-scale study of 57 attitudes (Nosek, 2005), the relationship between IAT scores and explicit intergroup attitudes was as high as r = .59 (Democrats vs. Republicans) and as low as r = .33 (European Americans vs. African Americans) or r = .10 (Thin people vs. Fat people). Generally, implicit-explicit relations are lower in studies on intergroup topics than on other topics (Cameron et al., 2012; Greenwald et al., 2009).

The strength of the relationship between implicit and explicit intergroup biases is moderated by factors documented in one large-scale study and several meta-analyses (Cameron et al., 2012; Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005; Oswald et al., 2013). Much of this work has focused on the IAT, finding that implicit-explicit relations are stronger when the attitude is more strongly elaborated, is perceived as distinct from other people's, has a bipolar structure (i.e., liking for one group implies disliking of the other), and when the explicit measure assesses a relative preference rather than an absolute preference (Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005).

---------------------
Note: If you are a healthcare professional, you need to be aware of these biases.

Sunday, January 31, 2021

Free Will & The Brain

Kevin Loughran
Philosophy Now (2020)

The idea of free will touches human decision-making and action, and so the workings of the brain. So the science of the brain can inform the argument about free will. Technology, especially in the form of brain scanning, has provided new insights into what is happening in our brains prior to us taking action. And some brain studies – especially the ones led by Benjamin Libet at the University of California in San Francisco in the 1980s – have indicated the possibility of unconscious brain activity setting up our body to act on our decisions before we are conscious of having decided to act. For some people, such studies have confirmed the judgement that we lack free will. But do these studies provide sufficient data to justify such a generalisation about free will?

First, these studies do touch on the issue of how we make choices and reach decisions; but they do so in respect of some simple, and directed, tasks. For example, in one of Libet’s studies, he asked volunteers to move a hand in one direction or another and to note the time when they consciously decided to do so (50 Ideas You Really Need to Know about the Human Brain, Moheb Costandi, p.60, 2013). The data these and similar brain studies provide might justly be taken to prove that when research volunteers are asked by a researcher to do one simple thing or another, and they do it, then unconscious brain processes may have moved them towards a choice a fraction of a second before they were conscious of making that choice. The question is, can they be taken to prove more than that?

To explore this question let’s first look at some of the range of choices we make in our lives day by day and week by week, then ask what they might tell us about how we come to make decisions and how this might relate to experimental results such as Libet’s. At the very least, examining the range of our choices might provide a better, wider range of research projects in the future.

Thursday, January 14, 2021

'How Did We Get Here?' A Call For An Evangelical Reckoning On Trump

Rachel Martin
NPR.org
Originally posted 13 Jan 2021

Here is an excerpt:

You write that Trump has burned down the Republican Party. What has he done to the evangelical Christian movement?

If you asked today, "What's an evangelical?" to most people, I would want them to say: someone who believes Jesus died on the cross for our sin and in our place and we're supposed to tell everyone about it. But for most people they'd say, "Oh, those are those people who are really super supportive of the president no matter what he does." And I don't think that's what we want to be known for. That's certainly not what I want to be known for. And I think as this presidency is ending in tatters as it is, hopefully more and more evangelicals will say, "You know, we should have seen earlier, we should have known better, we should have honored the Lord more in our actions these last four years."

Should ministers on Sunday mornings be delivering messages about how to sort fact from fiction and discouraging their parishioners from seeking truth in these darkest corners of the Internet peddling lies?

Absolutely, absolutely. Mark Noll wrote years ago a book called The Scandal of the Evangelical Mind, and he was talking about the lack of intellectual engagement in some corners of evangelicalism.

I think the scandal of the evangelical mind today is the gullibility that so many have been brought into — conspiracy theories, false reports and more — and so I think the Christian responsibility is we need to engage in what we call in the Christian tradition, discipleship. Jesus says, "I am the way, the truth and the life." So Jesus literally identifies himself as the truth; therefore, if there ever should be a people who care about the truth, it should be people who call themselves followers of Jesus.

Tuesday, December 15, 2020

(How) Do You Regret Killing One to Save Five? Affective and Cognitive Regret Differ After Utilitarian and Deontological Decisions

Goldstein-Greenwood J, et al.
Personality and Social Psychology 
Bulletin. 2020;46(9):1303-1317. 
doi:10.1177/0146167219897662

Abstract

Sacrificial moral dilemmas, in which opting to kill one person will save multiple others, are definitionally suboptimal: Someone dies either way. Decision-makers, then, may experience regret about these decisions. Past research distinguishes affective regret, negative feelings about a decision, from cognitive regret, thoughts about how a decision might have gone differently. Classic dual-process models of moral judgment suggest that affective processing drives characteristically deontological decisions to reject outcome-maximizing harm, whereas cognitive deliberation drives characteristically utilitarian decisions to endorse outcome-maximizing harm. Consistent with this model, we found that people who made or imagined making sacrificial utilitarian judgments reliably expressed relatively more affective regret and sometimes expressed relatively less cognitive regret than those who made or imagined making deontological dilemma judgments. In other words, people who endorsed causing harm to save lives generally felt more distressed about their decision, yet less inclined to change it, than people who rejected outcome-maximizing harm.

General Discussion

Across four studies, we found that different sacrificial moral dilemma decisions elicit different degrees of affective and cognitive regret. We found robust evidence that utilitarian decision-makers who accept outcome-maximizing harm experience far more affective regret than their deontological decision-making counterparts who reject outcome-maximizing harm, and we found somewhat weaker evidence that utilitarian decision-makers experience less cognitive regret than deontological decision-makers. The significant interaction between dilemma decision and regret type predicted in H1 emerged both when participants freely endorsed dilemma decisions (Studies 1, 3, and 4) and when they were randomly assigned to imagine making a decision (Study 2). Hence, the present findings cannot simply be attributed to chronic differences in the types of regret that people who prioritize each decision experience. Moreover, we found tentative evidence for H2: Focusing on the counterfactual world in which they made the alternative decision attenuated utilitarian decision-makers’ heightened affective regret compared with factual reflection, and reduced differences in affective regret between utilitarian and deontological decision-makers (Study 4). Furthermore, our findings do not appear attributable to impression management concerns, as there were no differences between public and private reports of regret.

Conspiracy Theorists May Really Just Be Lonely

Matthew Hutson
Scientific American
Originally posted 1 May 17

Conspiracy theorists are often portrayed as nutjobs, but some may just be lonely, recent studies suggest. Separate research has shown that social exclusion creates a feeling of meaninglessness and that the search for meaning leads people to perceive patterns in randomness. A new study in the March issue of the Journal of Experimental Social Psychology connects the dots, reporting that ostracism enhances superstition and belief in conspiracies.

In one experiment, people wrote about a recent unpleasant interaction with friends, then rated their feelings of exclusion, their search for purpose in life, their belief in two conspiracies (that the government uses subliminal messages and that drug companies withhold cures), and their faith in paranormal activity in the Bermuda Triangle. The more excluded people felt, the greater their desire for meaning and the more likely they were to harbor suspicions.

In a second experiment, college students were made to feel excluded or included by their peers, then read two scenarios suggestive of conspiracies (price-fixing, office sabotage) and one about a made-up good-luck ritual (stomping one's feet before a meeting). Those who were excluded reported greater connection between behaviors and outcomes in the stories compared with those who were included.


Thursday, December 3, 2020

The psychologist rethinking human emotion

David Shariatmadari
The Guardian
Originally posted 25 Sept 20

Here is an excerpt:

Barrett’s point is that if you understand that “fear” is a cultural concept, a way of overlaying meaning on to high arousal and high unpleasantness, then it’s possible to experience it differently. “You know, when you have high arousal before a test, and your brain makes sense of it as test anxiety, that’s a really different feeling than when your brain makes sense of it as energised determination,” she says. “So my daughter, for example, was testing for her black belt in karate. Her sensei was a 10th degree black belt, so this guy is like a big, powerful, scary guy. She’s having really high arousal, but he doesn’t say to her, ‘Calm down’; he says, ‘Get your butterflies flying in formation.’” That changed her experience. “Her brain could have made anxiety, but it didn’t, it made determination.”

In the lectures Barrett gives to explain this model, she talks of the brain as a prisoner in a dark, silent box: the skull. The only information it gets about the outside world comes via changes in light (sight), air pressure (sound), exposure to chemicals (taste and smell), and so on. It doesn’t know the causes of these changes, and so it has to guess at them in order to decide what to do next.

How does it do that? It compares those changes to similar changes in the past, and makes predictions about the current causes based on experience. Imagine you are walking through a forest. A dappled pattern of light forms a wavy black shape in front of you. You’ve seen many thousands of images of snakes in the past, you know that snakes live in the forest. Your brain has already set in train an array of predictions.

The point is that this prediction-making is consciousness, which you can think of as a constant rolling process of guesses about the world being either confirmed or proved wrong by fresh sensory inputs. In the case of the dappled light, as you step forward you get information that confirms a competing prediction that it’s just a stick: the prediction of a snake was ultimately disproved, but not before it grew so strong that neurons in your visual cortex fired as though one was actually there, meaning that for a split second you “saw” it. So we are all creating our world from moment to moment. If you didn’t, your brain wouldn’t be able to make the changes necessary for your survival quickly enough. If the prediction “snake” wasn’t already in train, then the shot of adrenaline you might need in order to jump out of its way would come too late.

Sunday, November 1, 2020

Believing in Overcoming Cognitive Biases

T. S. Doherty & A. E. Carroll
AMA J Ethics. 2020;22(9):E773-778. 
doi: 10.1001/amajethics.2020.773.

Abstract

Like all humans, health professionals are subject to cognitive biases that can render diagnoses and treatment decisions vulnerable to error. Learning effective debiasing strategies and cultivating awareness of confirmation, anchoring, and outcomes biases and the affect heuristic, among others, and their effects on clinical decision making should be prioritized in all stages of education.

Here is an excerpt:

The practice of reflection reinforces behaviors that reduce bias in complex situations. A 2016 systematic review of cognitive intervention studies found that guided reflection interventions were associated with the most consistent success in improving diagnostic reasoning. A guided reflection intervention involves searching for and being open to alternative diagnoses and willingness to engage in thoughtful and effortful reasoning and reflection on one’s own conclusions, all with supportive feedback or challenge from a mentor.

The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes. These strategies involve conscious consideration of alternative diagnoses other than those that come intuitively. One example involves reading radiographs in the emergency department. According to studies, a common pitfall among inexperienced clinicians in such a situation is to call off the search once a positive finding has been noticed, which often leads to other abnormalities (eg, second fractures) being overlooked. Thus, the forcing strategy in this situation would be to continue a search even after an initial fracture has been detected.

Wednesday, July 29, 2020

Survival of the Friendliest: Homo sapiens Evolved via Selection for Prosociality

Brian Hare
Annu. Rev. Psychol. 2017, 68:155-186.

Abstract

The challenge of studying human cognitive evolution is identifying unique features of our intelligence while explaining the processes by which they arose. Comparisons with nonhuman apes point to our early-emerging cooperative-communicative abilities as crucial to the evolution of all forms of human cultural cognition, including language. The human self-domestication hypothesis proposes that these early-emerging social skills evolved when natural selection favored increased in-group prosociality over aggression in late human evolution. As a by-product of this selection, humans are predicted to show traits of the domestication syndrome observed in other domestic animals. In reviewing comparative, developmental, neurobiological, and paleoanthropological research, compelling evidence emerges for the predicted relationship between unique human mentalizing abilities, tolerance, and the domestication syndrome in humans. This synthesis includes a review of the first a priori test of the self-domestication hypothesis as well as predictions for future tests.
