Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, February 16, 2021

Strategic Regulation of Empathy

Weisz, E., & Cikara, M. 
(2020, October 9).

Abstract

Empathy is an integral part of socio-emotional well-being, yet recent research has highlighted some of its downsides. Here we examine literature that establishes when, how much, and what aspects of empathy promote specific outcomes. After reviewing a theoretical framework which characterizes empathy as a suite of separable components, we examine evidence showing how dissociations of these components affect important socio-emotional outcomes and describe emerging evidence suggesting that these components can be independently and deliberately modulated. Finally, we advocate for a new approach to a multi-component view of empathy which accounts for the interrelations among components. This perspective advances scientific conceptualization of empathy and offers suggestions for tailoring empathy to help people realize their social, emotional, and occupational goals.

From Concluding Remarks

Early research on empathy regarded it as a monolithic construct. This characterization ultimately gave rise to a second wave of empathy-related research, which explicitly examined dissociations among empathy-related components. Subsequently, researchers noticed that individual components held different predictive power over key outcomes such as helping and occupational burnout. As described above, however, there are many instances in which these components track together in the real world, suggesting that although they can dissociate, they often operate in tandem.

Because empathy-related components rely on separable neural systems, the field of social neuroscience has already made significant progress toward the goal of characterizing instances when components do (or do not) track together. For example, although affective and cognitive channels can independently contribute to judgments of others' emotional states, they also operate in synchrony during more naturalistic socio-emotional tasks. However, far more behavioral research is needed to characterize the co-occurrence of components in people's everyday social interactions. Because people differ in their tendencies to engage distinct components of empathy, a better understanding of the separability and interrelations of these components in real-world social scenarios can help tailor empathy-training programs to promote desirable outcomes. Empathy-training efforts are on average effective (Hedges' g = 0.51) but generally intervene on empathy as a whole (rather than specific components).
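The effect size quoted above (Hedges' g = 0.51) is a standardized mean difference with a small-sample bias correction. A minimal sketch in Python of how such a value is computed; the group means, SDs, and sample sizes below are purely illustrative and are not taken from the cited meta-analysis:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d (standardized mean difference) with Hedges'
    small-sample bias correction applied."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Hedges' correction factor J ~ 1 - 3 / (4*df - 1)
    df = n_t + n_c - 2
    return d * (1 - 3 / (4 * df - 1))

# Illustrative numbers: empathy-training group vs. control
print(round(hedges_g(3.8, 3.3, 1.0, 1.0, 50, 50), 2))
```

With equal SDs of 1.0, the uncorrected difference here is d = 0.5, and the correction shrinks it slightly toward zero, which is the point of Hedges' g for modest samples.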

Monday, February 15, 2021

Response time modelling reveals evidence for multiple, distinct sources of moral decision caution

Andrejević, M., et al. 
(2020, November 13). 

Abstract

People are often cautious in delivering moral judgments of others’ behaviours, as falsely accusing others of wrongdoing can be costly for social relationships. Caution might further be present when making judgements in information-dynamic environments, as contextual updates can change our minds. This study investigated the processes with which moral valence and context expectancy drive caution in moral judgements. Across two experiments, participants (N = 122) made moral judgements of others’ sharing actions. Prior to judging, participants were informed whether contextual information regarding the deservingness of the recipient would follow. We found that participants slowed their moral judgements when judging negatively valenced actions and when expecting contextual updates. Using a diffusion decision model framework, these changes were explained by shifts in drift rate and decision bias (valence) and boundary setting (context), respectively. These findings demonstrate how moral decision caution can be decomposed into distinct aspects of the unfolding decision process.

From the Discussion

Our finding that participants slowed their judgments when expecting contextual information is consistent with previous research showing that people are more cautious when aware that they are more prone to making mistakes. Notably, previous research has demonstrated this effect for decision mistakes in tasks in which people are not given additional information or a chance to change their minds. The current findings show that this effect also extends to dynamic decision-making contexts, in which learning additional information can lead to changes of mind. Crucially, here we show that this type of caution can be explained by the widening of the decision boundary separation in a process model of decision-making.

Sunday, February 14, 2021

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Frank, L., Nyholm, S. 
Artif Intell Law 25, 305–323 (2017).
https://doi.org/10.1007/s10506-017-9212-y

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

Here is an excerpt:

Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers—at least if the sex robots are of a highly sophisticated sort.

The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020.

Abstract

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.

Highlights

• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• Deliberating agent earns more blame than implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer.

• Asymmetry in blame vs. praise suggests only the decider is judged as a moral agent.

• Effect is attenuated if decider's job is primarily to implement.

Friday, February 12, 2021

Measuring Implicit Intergroup Biases.

Lai, C. K., & Wilson, M. 
(2020, December 9).

Abstract

Implicit intergroup biases are automatically activated prejudices and stereotypes that may influence judgments of others on the basis of group membership. We review evidence on the measurement of implicit intergroup biases, finding: implicit intergroup biases reflect the personal and the cultural, implicit measures vary in reliability and validity, and implicit measures vary greatly in their prediction of explicit and behavioral outcomes due to theoretical and methodological moderators. We then discuss three challenges to the application of implicit intergroup biases to real‐world problems: (1) a lack of research on social groups of scientific and public interest, (2) developing implicit measures with diagnostic capabilities, and (3) resolving ongoing ambiguities in the relationship between implicit bias and behavior. Making progress on these issues will clarify the role of implicit intergroup biases in perpetuating inequality.

(cut)

Predictive Validity

Implicit intergroup biases are predictive of explicit biases, behavioral outcomes, and regional differences in inequality.

Relationship to explicit prejudice & stereotypes. 

The relationship between implicit and explicit measures of intergroup bias is consistently positive, but the size of the relationship depends on the topic. In a large-scale study of 57 attitudes (Nosek, 2005), the relationship between IAT scores and explicit intergroup attitudes was as high as r = .59 (Democrats vs. Republicans) and as low as r = .33 (European Americans vs. African Americans) or r = .10 (Thin people vs. Fat people). Generally, implicit-explicit relations are lower in studies on intergroup topics than in other topics (Cameron et al., 2012; Greenwald et al., 2009). The strength of the relationship between implicit and explicit intergroup biases is moderated by factors which have been documented in one large-scale study and several meta-analyses (Cameron et al., 2012; Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005; Oswald et al., 2013). Much of this work has focused on the IAT, finding that implicit-explicit relations are stronger when the attitude is more strongly elaborated, perceived as distinct from other people, has a bipolar structure (i.e., liking for one group implies disliking of the other), and the explicit measure assesses a relative preference rather than an absolute preference (Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005).

---------------------
Note: If you are a healthcare professional, you need to be aware of these biases.

Thursday, February 11, 2021

Paranoia and Belief Updating During a Crisis

Suthaharan, P., Reed, E.,  et al. 
(2020, September 4). 

Abstract

The 2019 coronavirus (COVID-19) pandemic has made the world seem unpredictable. During such crises we can experience concerns that others might be against us, culminating perhaps in paranoid conspiracy theories. Here, we investigate paranoia and belief updating in an online sample (N=1,010) in the United States of America (U.S.A). We demonstrate the pandemic increased individuals’ self-rated paranoia and rendered their task-based belief updating more erratic. Local lockdown and reopening policies, as well as culture more broadly, markedly influenced participants’ belief-updating: an early and sustained lockdown rendered people’s belief updating less capricious. Masks are clearly an effective public health measure against COVID-19. However, state-mandated mask wearing increased paranoia and induced more erratic behaviour. Remarkably, this was most evident in those states where adherence to mask wearing rules was poor but where rule following is typically more common. This paranoia may explain the lack of compliance with this simple and effective countermeasure. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable, but at the same time predicted more rewards. In a follow-up study we found people who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines – again, mask attitude and conspiratorial beliefs were associated with erratic task behaviour and changed priors. Future public health responses to the pandemic might leverage these observations, mollifying paranoia and increasing adherence by tempering people’s expectations of others’ behaviour, and the environment more broadly, and reinforcing compliance.
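The "erratic belief updating" described above is typically captured with simple trial-by-trial learning models, in which a higher learning rate (as expected when a learner believes the environment is unstable) makes beliefs swing sharply with each new outcome. A minimal Rescorla-Wagner-style sketch, with illustrative outcome sequences and learning rates that are not taken from the paper:

```python
def update_beliefs(outcomes, learning_rate):
    """Rescorla-Wagner update: the belief about the reward probability
    moves toward each observed outcome by a fraction `learning_rate`."""
    belief, trace = 0.5, []
    for outcome in outcomes:
        belief += learning_rate * (outcome - belief)
        trace.append(belief)
    return trace

def jumpiness(trace):
    """Total trial-to-trial movement of the belief."""
    return sum(abs(b - a) for a, b in zip(trace, trace[1:]))

outcomes = [1, 1, 0, 1, 0, 0, 1, 1]
stable = update_beliefs(outcomes, 0.1)   # slow, smooth updating
erratic = update_beliefs(outcomes, 0.9)  # fast, 'capricious' updating

# A learner expecting an unstable task updates far more erratically
print(jumpiness(erratic) > jumpiness(stable))
```

In this toy setting the high-learning-rate trace lurches between extremes after every outcome, which is the behavioural signature the study links to higher paranoia and volatility expectations.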

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition, Volume 209, 
April 2021, 104572

Abstract

Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To what extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support to the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support to its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Tuesday, February 9, 2021

Neanderthals And Humans Were at War For Over 100,000 Years, Evidence Shows

Nicholas Longrich
The Conversation
Originally posted 3 Nov 20

Here is an excerpt:

Why else would we take so long to leave Africa? Not because the environment was hostile but because Neanderthals were already thriving in Europe and Asia.

It's exceedingly unlikely that modern humans met the Neanderthals and decided to just live and let live. If nothing else, population growth inevitably forces humans to acquire more land, to ensure sufficient territory to hunt and forage food for their children.

But an aggressive military strategy is also a good evolutionary strategy.

Instead, for thousands of years, we must have tested their fighters, and for thousands of years, we kept losing. In weapons, tactics, strategy, we were fairly evenly matched.

Neanderthals probably had tactical and strategic advantages. They'd occupied the Middle East for millennia, doubtless gaining intimate knowledge of the terrain, the seasons, how to live off the native plants and animals.

In battle, their massive, muscular builds must have made them devastating fighters in close-quarters combat. Their huge eyes likely gave Neanderthals superior low-light vision, letting them manoeuvre in the dark for ambushes and dawn raids.

Sapiens victorious

Finally, the stalemate broke, and the tide shifted. We don't know why. It's possible the invention of superior ranged weapons – bows, spear-throwers, throwing clubs – let lightly-built Homo sapiens harass the stocky Neanderthals from a distance using hit-and-run tactics.

Or perhaps better hunting and gathering techniques let sapiens feed bigger tribes, creating numerical superiority in battle.

Even after primitive Homo sapiens broke out of Africa 200,000 years ago, it took over 150,000 years to conquer Neanderthal lands. In Israel and Greece, archaic Homo sapiens took ground only to fall back against Neanderthal counteroffensives, before a final offensive by modern Homo sapiens, starting 125,000 years ago, eliminated them.

Monday, February 8, 2021

The Origins and Psychology of Human Cooperation

Joseph Henrich and Michael Muthukrishna
Annual Review of Psychology 2021 72:1, 207-240

Abstract

Humans are an ultrasocial species. This sociality, however, cannot be fully explained by the canonical approaches found in evolutionary biology, psychology, or economics. Understanding our unique social psychology requires accounting not only for the breadth and intensity of human cooperation but also for the variation found across societies, over history, and among behavioral domains. Here, we introduce an expanded evolutionary approach that considers how genetic and cultural evolution, and their interaction, may have shaped both the reliably developing features of our minds and the well-documented differences in cultural psychologies around the globe. We review the major evolutionary mechanisms that have been proposed to explain human cooperation, including kinship, reciprocity, reputation, signaling, and punishment; we discuss key culture–gene coevolutionary hypotheses, such as those surrounding self-domestication and norm psychology; and we consider the role of religions and marriage systems. Empirically, we synthesize experimental and observational evidence from studies of children and adults from diverse societies with research among nonhuman primates.

From the Discussion

Understanding the origins and psychology of human cooperation is an exciting and rapidly developing enterprise. Those interested in engaging with this grand question should consider three elements of this endeavor: (1) theoretical frameworks, (2) diverse methods, and (3) history. To the first, the extended evolutionary framework we described comes with a rich body of theories and hypotheses as well as tools for developing new theories, about both human nature and cultural psychology. We encourage psychologists to take the formal theory seriously and learn to read the primary literature (McElreath & Boyd 2007). Second, the nature of human cooperation demands cross-cultural, comparative and developmental approaches that integrate experiments, observation, and ethnography. Haphazard cross-country cyber sampling is less efficient than systematic tests with populations based on theoretical predictions. Finally, the evidence makes it clear that as norms evolve over time, so does our psychology; historical differences can tell us a lot about contemporary psychological patterns. This means that researchers need to think about psychology from a historical perspective and begin to devise ways to bring history and psychology together (Muthukrishna et al. 2020).