Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, February 28, 2022

Bridging Political Divides by Correcting the Basic Morality Bias

Puryear, C., Kubin, E., Schein, C., et al.
(2022, January 11).
https://doi.org/10.31234/osf.io/fk8g6

Abstract

Efforts to bridge political divides often focus on navigating complex and divisive issues. However, nine studies suggest that we should also focus on a more basic moral divide: the erroneous belief that political opponents lack a fundamental sense of right and wrong. This “basic morality bias” is tied to political dehumanization and is revealed by multiple methods, including natural language analyses from a large Twitter corpus, and a representative survey of Americans with incentives for accuracy. In the US, both Democrats and Republicans substantially overestimate the number of political outgroup members who approve of blatant wrongs (e.g., child pornography, embezzlement). Importantly, the basic morality bias can be corrected with a brief, scalable intervention. Providing information that just one political opponent condemns blatant wrongs increases willingness to work with political opponents and substantially decreases political dehumanization.

From the General Discussion

These findings provide vital insights into why the United States finds itself burdened by political gridlock, partisanship, and high levels of political dehumanization. It may be difficult to imagine how disagreement over details of political policy can make partisans unwilling to even speak to one another or see each other as equally human. However, it could be that Americans do not see themselves in conflict with an alternative ideology but with opponents who lack a moral compass entirely. Believing others lack this fundamental component of humanity has fueled intergroup conflict throughout history. If the political climate in America continues down this path, then it may not be surprising to see two parties—who believe each other embrace murder and theft—continue to escalate conflict.  

Fortunately, our results unveil a simple intervention with both large and broad effects upon the basic morality bias. Telling others that we oppose wrongs as basic as murder seems like it should provide no new information capable of altering how others see us. This was also supported by a pilot study showing that participants do not expect information about basic moral judgments to generally impact their evaluations of others. However, because the basic morality bias is common in the political domain, assuring opponents that we have even the most minimal moral capacities improves their willingness to engage with us. Most importantly, our results suggest that even just one person who successfully communicates their basic moral values has the potential to make their entire political party seem more moral and human.

Sunday, February 27, 2022

Moral Leadership in the 2016 U.S. Presidential Election

W. Kidd & J. A. Vitriol
Political Psychology
First published: 27 September 2021

Abstract

Voters commonly revise their political beliefs to align with the political leaders with whom they strongly identify, suggesting voters lack a coherent ideological structure causally prior to their political loyalties. Alternatively, voters may organize their preferences around nonideological concepts or values, such as moral belief. Using a four-wave panel study during the 2016 election, we examine the relationship between voters' own moral foundations and their perceptions of the candidates' moral beliefs. We observed a bidirectional relationship among Republicans, who revised both their own moral beliefs and their perceptions of Donald Trump to reduce incongruities. In contrast, Democrats revised their perceptions of Hillary Clinton to align with their own moral beliefs. Importantly, consistency between voters' and political candidates' moral beliefs was more common among partisans and led to polarized evaluations of the two candidates on Election Day.


From a PsyPost interview:

Trump supporters also appeared to adjust their moral foundations to align more closely with their perceptions of Trump’s moral foundations. Perceptions of Trump at wave two changed how his supporters perceived their own moral beliefs at wave three. But this pattern was not found among Clinton supporters, who did not adjust their own moral beliefs.

“Political leadership is moral leadership,” the researchers told PsyPost. “Many voters revise even their fundamental views of what they describe as right and wrong based on their perceptions of the candidates they support. Ideas and positions that might have seemed out of bounds can become normalized very quickly if they receive support from political leaders.”

“That voters adjust their ‘perceptions’ of the candidates is also likely a reason partisan conflict often seems so intractable, as voters from each party may not even share a common understanding of the candidates in question, limiting any form of reasoned debate.”

Saturday, February 26, 2022

Experts Are Ringing Alarms About Elon Musk’s Brain Implants

Noah Kirsch
Daily Beast
Posted 25 Jan 2022

Here is an excerpt:

“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.

“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”

In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.

Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.

There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”

The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?

Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”

Neuralink did not respond to a request for comment as of publication time.

Friday, February 25, 2022

Public Deliberation about Gene Editing in the Wild

Gusmano, M. K., Kaebnick, G. E., et al. (2021).
Hastings Center Report, 51(S2), S34–S41.
https://doi.org/10.1002/hast.1318

Abstract

Genetic editing technologies have long been used to modify domesticated nonhuman animals and plants. Recently, attention and funding have also been directed toward projects for modifying nonhuman organisms in the shared environment—that is, in the “wild.” Interest in gene editing nonhuman organisms for wild release is motivated by a variety of goals, and such releases hold the possibility of significant, potentially transformative benefit. The technologies also pose risks and are often surrounded by high uncertainty. Given the stakes, scientists and advisory bodies have called for public engagement in the science, ethics, and governance of gene editing research in nonhuman organisms. Most calls for public engagement lack details about how to design a broad public deliberation, including questions about participation, how to structure the conversations, how to report on the content, and how to link the deliberations to policy. We summarize the key design elements that can improve broad public deliberations about gene editing in the wild.

Here is the gist of the paper:

We draw on interdisciplinary scholarship in bioethics, political science, and public administration to move forward on this knot of conceptual, normative, and practical problems. When is broad public deliberation about gene editing in the wild necessary? And when it is required, how should it be done? These questions lead to a suite of further questions about, for example, the rationale and goals of deliberation, the features of these technologies that make public deliberation appropriate or inappropriate, the criteria by which “stakeholders” and “relevant publics” for these uses might be identified, how different approaches to public deliberation map onto the challenges posed by the technologies, how the topic to be deliberated upon should be framed, and how the outcomes of public deliberation can be meaningfully connected to policy-making.

Thursday, February 24, 2022

Robot performs first laparoscopic surgery without human help (and outperformed human doctors)

Johns Hopkins University. (2022, January 26). 
ScienceDaily. Retrieved January 28, 2022

A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human -- a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.

"Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure," said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins' Whiting School of Engineering.

The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.

Working with collaborators at the Children's National Hospital in Washington, D.C. and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig's intestines accurately, but required a large incision to access the intestine and more guidance from humans.

The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the-art imaging systems that provide more accurate visualizations of the surgical field.

Soft-tissue surgery is especially hard for robots because of its unpredictability, forcing them to be able to adapt quickly to handle unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.

Wednesday, February 23, 2022

I See Color

Khama Ennis
On The Flip Side
Original date: February 13, 2020

9 minutes worth watching: Patient biases versus professional obligations

Tuesday, February 22, 2022

Copy the In-group: Group Membership Trumps Perceived Reliability, Warmth, and Competence in a Social-Learning Task

Montrey, M., & Shultz, T. R. (2022). 
Psychological Science, 33(1), 165–174. 
https://doi.org/10.1177/09567976211032224

Abstract

Surprisingly little is known about how social groups influence social learning. Although several studies have shown that people prefer to copy in-group members, these studies have failed to resolve whether group membership genuinely affects who is copied or whether group membership merely correlates with other known factors, such as similarity and familiarity. Using the minimal-group paradigm, we disentangled these effects in an online social-learning game. In a sample of 540 adults, we found a robust in-group-copying bias that (a) was bolstered by a preference for observing in-group members; (b) overrode perceived reliability, warmth, and competence; (c) grew stronger when social information was scarce; and (d) even caused cultural divergence between intermixed groups. These results suggest that people genuinely employ a copy-the-in-group social-learning strategy, which could help explain how inefficient behaviors spread through social learning and how humans maintain the cultural diversity needed for cumulative cultural evolution.

From the Discussion

In fact, if people are predisposed to copy in-group members, perhaps even when their perceived competence is low, this could help explain the spread of inefficient or even deleterious behaviors. For example, opposition to vaccination is often disseminated through highly clustered and enclosed online communities (Yuan & Crooks, 2018) who use in-group-focused language (Mitra et al., 2016). Likewise, fake news tends to spread among politically aligned individuals (Grinberg et al., 2019), and the most effective puppet accounts prefer to portray themselves as in-group members rather than as knowledgeable experts (Xia et al., 2019). Our research also sheds light on why social media platforms seem especially prone to spreading misinformation. By offering such fine-grained control over whom users observe, these platforms may spur the creation of homogeneous social networks, in which individuals are more inclined to copy others because they belong to the same social group.

Monday, February 21, 2022

Fast response times signal social connection in conversation

Templeton, E. M. et al.
Proceedings of the National Academy of Sciences 
Jan 2022, 119 (4) e2116915119

Abstract

Clicking is one of the most robust metaphors for social connection. But how do we know when two people "click"? We asked pairs of friends and strangers to talk with each other and rate their felt connection. For both friends and strangers, speed in response was a robust predictor of feeling connected. Conversations with faster response times felt more connected than conversations with slower response times, and within conversations, connected moments had faster response times than less-connected moments. This effect was determined primarily by partner responsivity: People felt more connected to the degree that their partner responded quickly to them rather than by how quickly they responded to their partner. The temporal scale of these effects (<250 ms) precludes conscious control, thus providing an honest signal of connection. Using a round-robin design in each of six closed networks, we show that faster responders evoked greater feelings of connection across partners. Finally, we demonstrate that this signal is used by third-party listeners as a heuristic of how well people are connected: Conversations with faster response times were perceived as more connected than the same conversations with slower response times. Together, these findings suggest that response times comprise a robust and sufficient signal of whether two minds “click.”

Significance

Social connection is critical for our mental and physical health yet assessing and measuring connection has been challenging. Here, we demonstrate that a feature intrinsic to conversation itself—the speed with which people respond to each other—is a simple, robust, and sufficient metric of social connection. Strangers and friends feel more connected when their conversation partners respond quickly. Because extremely short response times (<250 ms) preclude conscious control, they provide an honest signal that even eavesdroppers use to judge how well two people “click.”
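The timing measure itself is simple enough to sketch. Below is a minimal, hypothetical illustration (not the authors' pipeline; the turn format is an assumption) of how inter-turn response times could be extracted from a timestamped transcript:

```python
# Hypothetical sketch: extract inter-turn response times from a
# timestamped conversation transcript. Not the authors' code.

def response_times(turns):
    """Gaps between one speaker's turn ending and the other's beginning.

    `turns` is an ordered list of (speaker, start_sec, end_sec) tuples.
    Negative gaps (overlapping speech) are kept, since interruptions
    also carry timing information.
    """
    gaps = []
    for prev, nxt in zip(turns, turns[1:]):
        if prev[0] != nxt[0]:              # count only speaker switches
            gaps.append(nxt[1] - prev[2])  # next start minus previous end
    return gaps

# Example: gaps of about 0.18 s and 0.42 s; only the first falls under
# the ~250 ms window the paper argues precludes conscious control.
turns = [("A", 0.0, 2.1), ("B", 2.28, 5.0), ("A", 5.42, 7.9)]
print(response_times(turns))
```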

Sunday, February 20, 2022

The Pervasive Impact of Ignorance

Kirfel, L., & Phillips, J. S. 
(2022, January 16). 
https://doi.org/10.31234/osf.io/xbrnj

Abstract

Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents' actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents' mental states when acting. Here, we report the results of two large-scale experiments that replicate and extend twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on non-normative judgments depends largely on the agent knowing that they were violating a norm when acting. Moreover, we find evidence that the reduction in the impact of normality is underpinned by people's counterfactual reasoning: people are less likely to consider an alternative to the agent’s action if the agent is ignorant. We situate our findings in the wider debate around the role of normality in people's reasoning.

General Discussion

Motivated Moral Cognition

On the one hand, blame-based accounts may try and use this discovery to their advantage by arguing that an agent’s knowledge is directly relevant to whether they should be blamed (Cushman et al., 2008; Cushman, Sheketoff, Wharton, & Carey, 2013; Laurent, Nuñez, & Schweitzer, 2015; Yuill & Perner, 1988), and thus that these effects reflect that the impact of normality arises from the motivation to blame or hold agents responsible for their actions (Alicke & Rose, 2012; Livengood et al., 2017; Samland & Waldmann, 2016). For example, the tendency to report that agents who bring about harm acted intentionally may serve to corroborate people’s desire to judge the agent’s behaviour negatively (Nadelhoffer, 2004; Rogers et al., 2019). Motivated accounts differ in terms of exactly which moral judgment is argued to be at stake, i.e., whether norm violations elicit a desire to punish (Clark et al., 2014), to blame (Alicke & Rose, 2012; Hindriks et al., 2016), to hold accountable (Samland & Waldmann, 2016) or responsible (Sytsma, 2020a), and whether its influence works in the form of a cognitive bias (Alicke, 2000) or a more affective response (Nadelhoffer, 2004). Common to all, however, is the assumption that it is the impetus to morally condemn the norm-violating agent that underlies exaggerated attributions of specific properties, from free will to intentional action.

Our study puts an important constraint on how the normative judgment that motivated reasoning accounts assume might work. To account for our findings, motivated accounts cannot generally appeal to whether an agent’s action violated a clear norm, but have to take into account whether people would all-things-considered blame the agent (Driver, 2017). In that sense, the mere violation of a norm must not, itself, suffice to trigger the relevant blame response. Rather, the perception of this norm violation must occur in conjunction with an assessment of the epistemic state of the agent, such that the relevant motivated reasoning is only elicited when the agent is aware of the immorality of their action. For example, Alicke and Rose’s (2012) Culpable Control Model holds that immediate negative evaluative reactions to an agent’s behaviour often cause people to interpret all other agential features in a way that justifies blaming the agent. Such accounts face a challenge. On the one hand, they seem committed to the idea that people should discount the agent’s ignorance to support their immediate negative evaluation of the harm-causing action. On the other hand, they need to account for the fact that people seem to be sensitive to fine-grained epistemic features of the agent when forming their negative evaluation of the harm-causing action.

Saturday, February 19, 2022

Meta-analysis of human prediction error for incentives, perception, cognition, and action

Corlett, P.R., Mollick, J.A. & Kober, H.
Neuropsychopharmacol. (2022). 
https://doi.org/10.1038/s41386-021-01264-3

Abstract

Prediction errors (PEs) are a keystone for computational neuroscience. Their association with midbrain neural firing has been confirmed across species and has inspired the construction of artificial intelligence that can outperform humans. However, there is still much to learn. Here, we leverage the wealth of human PE data acquired in the functional neuroimaging setting in service of a deeper understanding, using an MKDA (multi-level kernel-based density) meta-analysis. Studies were identified with Google Scholar, and we included studies with healthy adult participants that reported activation coordinates corresponding to PEs published between 1999–2018. Across 264 PE studies that have focused on reward, punishment, action, cognition, and perception, consistent with domain-general theoretical models of prediction error we found midbrain PE signals during cognitive and reward learning tasks, and an insula PE signal for perceptual, social, cognitive, and reward prediction errors. There was evidence for domain-specific error signals––in the visual hierarchy during visual perception, and the dorsomedial prefrontal cortex during social inference. We assessed bias following prior neuroimaging meta-analyses and used family-wise error correction for multiple comparisons. This organization of computation by region will be invaluable in building and testing mechanistic models of cognitive function and dysfunction in machines, humans, and other animals. Limitations include small sample sizes and ROI masking in some included studies, which we addressed by weighting each study by sample size, and directly comparing whole brain vs. ROI-based results.
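The prediction error construct being localized here has a standard textbook form: the difference between the outcome received and the outcome expected, which in turn updates the expectation. A minimal sketch of that delta-rule (Rescorla-Wagner style) update follows, as an illustration of the construct rather than of the MKDA procedure:

```python
# Textbook delta-rule (Rescorla-Wagner style) value update.
# Illustrates the prediction error (PE) construct; it is not
# the meta-analytic MKDA method used in the paper.

def update_value(value, reward, alpha=0.1):
    """Return (new_value, prediction_error) after one outcome."""
    pe = reward - value            # PE: outcome minus expectation
    return value + alpha * pe, pe

value = 0.0
for trial in range(5):
    value, pe = update_value(value, reward=1.0)
    print(f"trial {trial}: PE = {pe:.3f}, value = {value:.3f}")
# PEs shrink across trials (1.000, 0.900, 0.810, ...) as the reward
# becomes expected; this decaying signal is what the imaging
# studies localize.
```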

Discussion

There appeared to be regionally compartmentalized PEs for primary and secondary rewards. Primary rewards elicited PEs in the dorsal striatum and amygdala, while secondary reward PEs were in ventral striatum. This is consistent with the representational transition that occurs with learning. We also found separable PEs for valence domains: caudal regions of the caudate-putamen are involved in the learning of safety signals and avoidance learning, more anterior striatum is selective for rewards, while more posterior is selective for losses. We found posterior midbrain aversive PE, consistent with preclinical findings that dopamine neurons––which respond to negative valence––are located more posteriorly in the midbrain and project to medial prefrontal regions. Additionally, we found both appetitive and aversive PEs in the amygdala, consistent with animal studies. The presence of both appetitive and aversive PE signals in the amygdala is consistent with its expanding role regulating learning based on surprise and uncertainty rather than fear per se. 

Perhaps conspicuous in its absence, given preclinical work, is the hippocampus, which is often held to be a nexus for reward PE, memory PE, and perceptual PE. This may be because the hippocampus is constantly and commonly engaged throughout task performance. Its PEs may not be resolved by the sluggish BOLD response, which is based on local field potentials and may represent the projections into a region (and therefore the striatal PE signals we observed may be the culmination of the processing in CA1, CA3, and subiculum). Furthermore, we have only recently been able to image subfields of the hippocampus (with higher field strengths and more rapid sequences); as higher resolution PE papers accrue we will revisit the meta-analysis of PEs.

Friday, February 18, 2022

Measuring Impartial Beneficence: A Kantian Perspective on the Oxford Utilitarianism Scale

Mihailov, E. (2022). 
Review of Philosophy and Psychology
https://doi.org/10.1007/s13164-021-00600-2

Abstract

To capture genuine utilitarian tendencies, Kahane et al. (2018, Psychological Review, 125:131) developed the Oxford Utilitarianism Scale (OUS) based on two subscales, which measure the commitment to impartial beneficence and the willingness to cause harm for the greater good. In this article, I argue that the impartial beneficence subscale, which breaks ground with previous research on utilitarian moral psychology, does not distinctively measure utilitarian moral judgment. I argue that Kantian ethics captures the all-encompassing impartial concern for the well-being of all human beings. The Oxford Utilitarianism Scale draws, in fact, a point of division that places Kantian and utilitarian theories on the same track. I suggest that the impartial beneficence subscale needs to be significantly revised in order to capture distinctively utilitarian judgments. Additionally, I propose that psychological research should focus on exploring multiple sources of the phenomenon of impartial beneficence without categorizing it as exclusively utilitarian.

Conclusion

The narrow focus of psychological research on sacrificial harm contributes to a Machiavellian picture of utilitarianism. By developing the Oxford Utilitarianism Scale, Kahane and his colleagues have shown how important it is for the study of moral judgment to include the inspiring ideal of impartial concern. However, this significant contribution goes beyond the utilitarian/deontological divide. We learn to divide moral theories depending on whether they are, at the root, either Kantian or utilitarian. Kant famously denounced lying, even if it would save someone’s life, whereas utilitarianism accepts transgression of moral rules if it maximizes the greater good. However, in regard to promoting the ideal of impartial beneficence, Kantian ethics and utilitarianism overlap because both theories contributed to the Enlightenment project of moral reform. In Kantian ethics, the very concepts of duty and moral community are interpreted in radically impartial and cosmopolitan terms. Thus, a fruitful area for future research opens on exploring the diverse psychological sources of impartial beneficence.

Thursday, February 17, 2022

Filling the gaps: Cognitive control as a critical lens for understanding mechanisms of value-based decision-making.

Frömer, R., & Shenhav, A. (2021, May 17). 
https://doi.org/10.31234/osf.io/dnvrj

Abstract

While often seeming to investigate rather different problems, research into value-based decision making and cognitive control have historically offered parallel insights into how people select thoughts and actions. While the former studies how people weigh costs and benefits to make a decision, the latter studies how they adjust information processing to achieve their goals. Recent work has highlighted ways in which decision-making research can inform our understanding of cognitive control. Here, we provide the complementary perspective: how cognitive control research has informed understanding of decision-making. We highlight three particular areas of research where this critical interchange has occurred: (1) how different types of goals shape the evaluation of choice options, (2) how people use control to adjust how they make their decisions, and (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales. We show how adopting this alternate viewpoint offers new insight into the determinants of both decisions and control; provides alternative interpretations for common neuroeconomic findings; and generates fruitful directions for future research.

Highlights

•  We review how taking a cognitive control perspective provides novel insights into the mechanisms of value-based choice.

•  We highlight three areas of research where this critical interchange has occurred:

      (1) how different types of goals shape the evaluation of choice options,

      (2) how people use control to adjust how they make their decisions, and

      (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales.

From Exerting Control Beyond Our Current Choice

We have so far discussed choices the way they are typically studied: in isolation. However, we don’t make choices in a vacuum, and our current choices depend on previous choices we have made (Erev & Roth, 2014; Keung, Hagen, & Wilson, 2019; Talluri et al., 2020; Urai, Braun, & Donner, 2017; Urai, de Gee, Tsetsos, & Donner, 2019). One natural way in which choices influence each other is through learning about the options, where the evaluation of the outcome of one choice refines the expected value (incorporating range and probability) assigned to that option in future choices (Fontanesi, Gluth, et al., 2019; Fontanesi, Palminteri, et al., 2019; Miletic et al., 2021). Here we focus on a different, complementary way, central to cognitive control research, where evaluations of the process of ongoing and past choices inform the process of future choices (Botvinick et al., 1999; Bugg, Jacoby, & Chanani, 2011; Verguts, Vassena, & Silvetti, 2015). In cognitive control research, these choice evaluations and their influence on subsequent adaptation are studied under the umbrella of performance monitoring (Carter et al., 1998; Ullsperger, Fischer, Nigbur, & Endrass, 2014). Unlike option-based learning, performance monitoring influences not only which options are chosen, but also how subsequent choices are made. It also informs higher-order decisions about strategy and task selection.
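The contrast drawn in this passage can be made concrete with a toy example. In the sketch below (an illustration of the distinction, not the authors' model; options, payoffs, and learning rates are all hypothetical), outcome feedback updates which option is preferred, while performance monitoring updates how the next choice is made, here via a response-caution setting:

```python
import random

# Toy contrast, not the authors' model: option-value learning changes
# WHICH option is chosen; performance monitoring changes HOW the next
# choice is made (here, a response-caution parameter that a fuller
# model would map onto, e.g., a decision threshold).

values = {"A": 0.5, "B": 0.5}    # learned option values
caution = 1.0                    # control setting adjusted by monitoring
alpha, gamma = 0.2, 0.3          # hypothetical learning rates

for t in range(100):
    choice = max(values, key=values.get)                 # pick best option
    rewarded = (choice == "A" and random.random() < 0.8) # "A" pays off 80%
    reward = 1.0 if rewarded else 0.0

    # Option-based learning: the outcome refines that option's
    # expected value for future choices.
    values[choice] += alpha * (reward - values[choice])

    # Performance monitoring: errors call for more caution on the
    # next choice; successes let caution relax slightly.
    caution += gamma if not rewarded else -0.1 * gamma
    caution = max(caution, 0.5)
```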

Wednesday, February 16, 2022

AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness

Wiese, W. and Friston, K.J.
Behavioural Brain Research
Volume 420, 26 February 2022, 113704

Abstract

Methods used in artificial intelligence (AI) overlap with methods used in computational psychiatry (CP). Hence, considerations from AI ethics are also relevant to ethical discussions of CP. Ethical issues include, among others, fairness and data ownership and protection. Apart from this, morally relevant issues also include potential transformative effects of applications of AI—for instance, with respect to how we conceive of autonomy and privacy. Similarly, successful applications of CP may have transformative effects on how we categorise and classify mental disorders and mental health. Since many mental disorders go along with disturbed conscious experiences, it is desirable that successful applications of CP improve our understanding of disorders involving disruptions in conscious experience. Here, we discuss prospects and pitfalls of transformative effects that CP may have on our understanding of mental disorders. In particular, we examine the concern that even successful applications of CP may fail to take all aspects of disordered conscious experiences into account.


Highlights

•  Considerations from AI ethics are also relevant to the ethics of computational psychiatry.

•  Ethical issues include, among others, fairness and data ownership and protection.

•  They also include potential transformative effects.

•  Computational psychiatry may transform conceptions of mental disorders and health.

•  Disordered conscious experiences may pose a particular challenge.

From the Discussion

At present, we are far from having a formal account of conscious experience. As mentioned in the introduction, many empirical theories of consciousness make competing claims, and there is still much uncertainty about the neural mechanisms that underwrite ordinary conscious processes (let alone psychopathology). Hence, the suggestion to foster research on the computational correlates of disordered conscious experiences should not be regarded as an invitation to ignore subjective reports. The patient’s perspective will continue to be central for normatively assessing their experienced condition. Computational models offer constructs to better describe and understand elusive aspects of a disordered conscious experience, but the patient will remain the primary authority on whether they are suffering from their condition. 

Tuesday, February 15, 2022

How do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting descriptive and normative hypotheses

Rodríguez-Arias, D., et al. (2020)
Bioethics 34(5)
DOI:10.1111/bioe.12707

Abstract

Bioethicists involved in end-of-life debates routinely distinguish between ‘killing’ and ‘letting die’. Meanwhile, previous work in cognitive science has revealed that when people characterize behaviour as either actively ‘doing’ or passively ‘allowing’, they do so not purely on descriptive grounds, but also as a function of the behaviour’s perceived morality. In the present report, we extend this line of research by examining how medical students and professionals (N = 184) and laypeople (N = 122) describe physicians’ behaviour in end-of-life scenarios. We show that the distinction between ‘ending’ a patient’s life and ‘allowing’ it to end arises from morally motivated causal selection. That is, when a patient wishes to die, her illness is treated as the cause of death and the doctor is seen as merely allowing her life to end. In contrast, when a patient does not wish to die, the doctor’s behaviour is treated as the cause of death and, consequently, the doctor is described as ending the patient’s life. This effect emerged regardless of whether the doctor’s behaviour was omissive (as in withholding treatment) or commissive (as in applying a lethal injection). In other words, patient consent shapes causal selection in end-of-life situations, and in turn determines whether physicians are seen as ‘killing’ patients, or merely as ‘enabling’ their death.

From the Discussion

Across three cases of end-of-life intervention, we find convergent evidence that moral appraisals shape behavior description (Cushman et al., 2008) and causal selection (Alicke, 1992; Kominsky et al., 2015). Consistent with the deontic hypothesis, physicians who behaved according to patients’ wishes were described as allowing the patient’s life to end. In contrast, physicians who disregarded the patient’s wishes were described as ending the patient’s life. Additionally, patient consent appeared to inform causal selection: The doctor was seen as the cause of death when disregarding the patient’s will, but the illness was seen as the cause of death when the doctor had obeyed the patient’s will.

Whether the physician’s behavior was omissive or commissive did not play a comparable role in behavior description or causal selection. First, these effects were weaker than those of patient consent. Second, while the effects of consent generalized to medical students and professionals, the effects of commission arose only among lay respondents. In other words, medical students and professionals treated patient consent as the sole basis for the doing/allowing distinction.

Taken together, these results confirm that doing and allowing serve a fundamentally evaluative purpose (in line with the deontic hypothesis, and Cushman et al., 2008), and only secondarily serve a descriptive purpose, if at all.

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits—traits that help us quickly decide who our allies are (Goodwin et al., 2014)—physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000), and thus obtain valuable information about whether the target is a threat or ally, where morally relevant information is absent (such as during initial impression formation) this motivation to assess moral character may lead to an overreliance on heuristic cues.

Sunday, February 13, 2022

Hit by the Virtual Trolley: When is Experimental Ethics Unethical?

Rueda, J. (2022).
ResearchGate.net

Abstract

The trolley problem is one of the liveliest research frameworks in experimental ethics. In the last decade, social neuroscience and experimental moral psychology have gone beyond the studies with mere text-based hypothetical moral dilemmas. In this article, I present the rationale behind testing the actual behaviour in more realistic scenarios through Virtual Reality and summarize the body of evidence raised by the experiments with virtual trolley scenarios. Then, I approach the argument of Ramirez and LaBarge (2020), who claim that the virtual simulation of the Footbridge version of the trolley dilemma is an unethical research practice, and I raise some objections to it. Finally, I provide some reflections about the means and ends of trolley-like scenarios and other sacrificial dilemmas in experimental ethics.

(cut)

From Rethinking the Means and Ends of Trolleyology

The first response states that these studies have no normative relevance at all. A traditional objection to the trolley dilemma pointed to the artificiality of the scenario and its normative uselessness in translating to real contemporary problems (see, for instance, Midgley, cited in Edmonds, 2014, pp. 100-101). We have already seen that this is not true. Indeed, the existence of real dilemmas that share structural similarities with hypothetical trolley scenarios makes it practically useful to test our intuitions on them (Edmonds, 2014). Besides that, a more sophisticated objection claims that intuitive responses to the trolley problem have no ethical value because intuitions are quite unreliable. Cognitive science has frequently shown how fallible, illogical, biased, and irrational many of our intuitive preferences can be. In fact, moral intuitions in text-based trolley dilemmas are subject to morally irrelevant factors such as order (Liao et al., 2012), frame (Cao et al., 2017), or mood (Pastötter et al., 2013). However, the fact that there are wrong or biased intuitions does not mean that intuitions do not have any epistemic or moral value. Dismissing intuitions because they are subject to implicit psychological factors in favour of armchair ethical theorizing is inconsistent. Empirical evidence should play a role in normative theorizing on trolley dilemmas as long as ethical theorizing is also subject to implicit psychological factors, which experimental research can help to make explicit (Kahane, 2013).

The second option states that what should be done as public policy on sacrificial dilemmas is what the majority of people say or do in those situations. In other words, the descriptive results of the experiments show us how we should act at the normative level. Consider the following example from the debate of self-driving vehicles: “We thus argue that any implementation of an ethical decision-making system for a specific context should be based on human decisions made in the same context” (Sütfeld et al., 2017). So, as most people act in a utilitarian way in VR simulations of traffic dilemmas, autonomous cars should act similarly in analogous situations (Sütfeld et al. 2017).

Saturday, February 12, 2022

Privacy and digital ethics after the pandemic

Carissa Véliz
Nature Electronics
Vol. 4, 10–11 (January 2021).

The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big.  Technophiles have often argued that if users are unhappy about online services, they can always opt-out. But opting-out has not felt like a meaningful alternative for years for at least two reasons.  

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet, and Facebook has shadow profiles on netizens even when they have never had an account on the platform.

(cut)

Reasons for optimism

Despite the concerning trends regarding privacy and digital ethics during the pandemic, there are reasons to be cautiously optimistic about the future.  First, citizens around the world are increasingly suspicious of tech companies, and are gradually demanding more from them. Second, there is a growing awareness that the lack of privacy ingrained in current apps entails a national security risk, which can motivate governments into action. Third, US President Joe Biden seems eager to collaborate with the international community, in contrast to his predecessor. Fourth, regulators in the US are seriously investigating how to curtail tech’s power, as evidenced by the Department of Justice’s antitrust lawsuit against Google and the Federal Trade Commission’s (FTC) antitrust lawsuit against Facebook.  Amazon and YouTube have also been targeted by the FTC for a privacy investigation. With discussions of a federal privacy law becoming more common in the US, it would not be surprising to see such a development in the next few years. Tech regulation in the US could have significant ripple effects elsewhere.

Friday, February 11, 2022

Social Neuro AI: Social Interaction As the "Dark Matter" of AI

S. Bolotta & G. Dumas
arxiv.org
Originally published 4 JAN 22

Abstract

We are making the case that empirical results from social psychology and social neuroscience along with the framework of dynamics can be of inspiration to the development of more intelligent artificial agents. We specifically argue that the complex human cognitive architecture owes a large portion of its expressive power to its ability to engage in social and cultural learning. In the first section, we aim at demonstrating that social learning plays a key role in the development of intelligence. We do so by discussing social and cultural learning theories and investigating the abilities that various animals have at learning from others; we also explore findings from social neuroscience that examine human brains during social interaction and learning. Then, we discuss three proposed lines of research that fall under the umbrella of Social NeuroAI and can contribute to developing socially intelligent embodied agents in complex environments. First, neuroscientific theories of cognitive architecture, such as the global workspace theory and the attention schema theory, can enhance biological plausibility and help us understand how we could bridge individual and social theories of intelligence. Second, intelligence occurs in time as opposed to over time, and this is naturally incorporated by the powerful framework offered by dynamics. Third, social embodiment has been demonstrated to provide social interactions between virtual agents and humans with a more sophisticated array of communicative signals. To conclude, we provide a new perspective on the field of multiagent robot systems, exploring how it can advance by following the aforementioned three axes.

Conclusion

At the crossroads of robotics, computer science, and psychology, one of the main challenges for humans is to build autonomous agents capable of participating in cooperative social interactions. This is important not only because AI will play a crucial role in our daily life, but also because, as demonstrated by results in social neuroscience and evolutionary psychology, intrapersonal intelligence is tightly connected with interpersonal intelligence, especially in humans (Dumas et al., 2014a). In this opinion article, we have attempted to unify the lines of research that, at the moment, are separated from each other; in particular, we have proposed three research directions that are expected to enhance efficient exchange of information between agents and, as a consequence, individual intelligence (especially in out-of-distribution generalization: OOD). This would contribute to creating agents that not only have humanlike OOD skills, but are also able to exhibit such skills in extremely complex and realistic environments (Dennis et al., 2021), while interacting with other embodied agents and with humans.


Thursday, February 10, 2022

Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Santoni de Sio, F., Mecacci, G. 
Philos. Technol. 34, 1057–1084 (2021). 
https://doi.org/10.1007/s13347-021-00450-x

Abstract

The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems—gaps in culpability, moral and public accountability, and active responsibility—caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also happen with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and non-satisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to address the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control,” that is, systems aligned with the relevant human reasons and capacities.

(cut)

The Tracing Conditions and its Payoffs for Responsibility

Unlike proposals based on new forms of legal liability, MHC (Meaningful Human Control) proposes that socio-technical systems also be systematically designed to avoid gaps in moral culpability, accountability, and active responsibility. The “tracing condition” proposes that a system can remain under MHC only in the presence of a solid alignment between the system and the technical, motivational, and moral capacities of the relevant agents involved, with different roles, in the design, control, and use of the system. The direct goal of this condition is promoting a fair distribution of moral culpability, thereby avoiding two undesired results. First, scapegoating, i.e. agents being held culpable without having a fair capacity to avoid wrongdoing (Elish, 2019): in the example of the automated driving systems above, for instance, the drivers’ relevant technical and motivational capacities not being sufficiently studied and trained. Second, impunity for avoidable accidents, i.e. culpability gaps: the impossibility of legitimately blaming anybody, as no individual agent possesses all the relevant capacities, e.g. the managers/designers having the technical capacity but not the moral motivation to avoid accidents, and the drivers having the motivation but not the skills. The tracing condition also helps address accountability and active responsibility gaps. If a person or organisation is to be morally or publicly accountable, then they must also possess the specific capacity to discharge this duty: according to another example discussed above, if a doctor is to remain accountable to her patients for her decisions, then she should maintain the capacity and motivation to understand the functioning of the AI system she uses and to explain her decisions to the patients.

Wednesday, February 9, 2022

How FDA Failures Contributed to the Opioid Crisis

Andrew Kolodny, MD
AMA J Ethics. 2020;22(8):E743-750. 
doi: 10.1001/amajethics.2020.743.

Abstract

Over the past 25 years, pharmaceutical companies deceptively promoted opioid use in ways that were often neither safe nor effective, contributing to unprecedented increases in prescribing, opioid use disorder, and deaths by overdose. This article explores regulatory mistakes made by the US Food and Drug Administration (FDA) in approving and labeling new analgesics. By understanding and correcting these mistakes, future public health crises caused by improper pharmaceutical marketing might be prevented.

Introduction

In the United States, opioid use disorder (OUD) and opioid overdose were once rare. But over the past 25 years, the number of Americans suffering from OUD increased exponentially and in parallel with an unprecedented increase in opioid prescribing. Today, OUD is common, especially in patients with chronic pain treated with opioid analgesics, and opioid overdose is the leading cause of accidental death.

(cut)

Oversight Recommendations

While fewer clinicians are initiating long-term opioids, overprescribing is still a problem. According to a recently published report, more than 2.9 million people initiated opioid use in December 2017. The FDA’s continued approval of new opioids exacerbates this problem. Each time a branded opioid hits the market, the company, eager for return on its investment, is given an incentive and, in essence, a license to promote aggressive prescribing. The FDA’s continued approval of new opioids pits the financial interests of drug companies against city, state, and federal efforts to discourage initiation of long-term opioids.

To finally end the opioid crisis, the FDA must enforce the Food, Drug, and Cosmetic Act, and it must act on recommendations from the NAS for an overhaul of its opioid approval and removal policies. The broad indication on opioid labels must be narrowed, and an explicit warning against long-term use and high-dose prescribing should be added. The label should reinforce, rather than contradict, guidance from the CDC, the Department of Veterans Affairs, the Agency for Healthcare Research and Quality, and other public health agencies that are calling for more cautious prescribing.

Tuesday, February 8, 2022

Can Conspiracy Beliefs Be Beneficial? Longitudinal Linkages Between Conspiracy Beliefs, Anxiety, Uncertainty Aversion, and Existential Threat

Liekefett, L., Christ, O., & Becker, J. C. (2022). 
Personality and Social Psychology Bulletin. 
https://doi.org/10.1177/01461672211060965

Abstract

Research suggests that conspiracy beliefs are adopted because they promise to reduce anxiety, uncertainty, and threat. However, little research has investigated whether conspiracy beliefs actually fulfill these promises. We conducted two longitudinal studies (N = 405 in Study 1; N = 1,012 in Study 2) to examine how conspiracy beliefs result from, and in turn influence, anxiety, uncertainty aversion, and existential threat. Random intercept cross-lagged panel analyses indicate that people who were, on average, more anxious, uncertainty averse, and existentially threatened held stronger conspiracy beliefs. Increases in conspiracy beliefs were either unrelated to changes in anxiety, uncertainty aversion, and existential threat (Study 2), or even predicted increases in these variables (Study 1). In both studies, increases in conspiracy beliefs predicted subsequent increases in conspiracy beliefs, suggesting a self-reinforcing circle. We conclude that conspiracy beliefs likely do not have beneficial consequences, but may even reinforce the negative experience of anxiety, uncertainty aversion, and existential threat.
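For readers unfamiliar with the method, the random intercept cross-lagged panel model (RI-CLPM) separates stable between-person differences from within-person change, which is what licenses the "within-person increases" language in the discussion below. Schematically, in notation of my own choosing (not the authors'), with x for conspiracy beliefs and y for, say, anxiety:

```latex
% Schematic RI-CLPM; notation is illustrative, not the authors'.
% Observed scores split into a wave mean, a stable trait (the
% random intercept), and a within-person deviation:
x_{it} = \mu_{x,t} + \mathrm{RI}_{x,i} + w_{x,it}, \qquad
y_{it} = \mu_{y,t} + \mathrm{RI}_{y,i} + w_{y,it}
% The within-person deviations carry the autoregressive and
% cross-lagged effects of interest:
w_{x,it} = a_x\, w_{x,i(t-1)} + c_{y \to x}\, w_{y,i(t-1)} + \varepsilon_{x,it}
```

The correlation between the random intercepts captures the between-person finding (more anxious people hold stronger conspiracy beliefs overall), while the cross-lagged paths (the c terms) capture whether a within-person increase in one variable predicts a later increase in the other.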

From the General Discussion

Are conspiracy beliefs beneficial or harmful for the individual?

In both studies, within-person increases in conspiracy beliefs did not predict reduced anxiety, uncertainty aversion, and existential threat. Increases in conspiracy beliefs were either unrelated to changes in these variables (Study 2) or even predicted increases in uncertainty aversion, anxiety, and existential threat (Study 1). This indicates that conspiracy beliefs are likely not beneficial in this regard. However, we cannot answer conclusively whether conspiracy beliefs, instead, reinforce the negative experience of anxiety, uncertainty, and threat: We observed these harmful effects only in Study 1. It may be that the time intervals in Study 2 were too long to observe these effects. It has been argued that the optimal time intervals to observe longitudinal relations are relatively short, especially for within-person effects (Dormann & Griffin, 2015), and that effect sizes typically decrease as time intervals get larger (Atkinson et al., 2000; Cohen, 1993; Dormann & Griffin, 2015; Hulin et al., 1990). This may explain why we observed only few within-person associations in Study 2.

We did not find within-person consequences of coronavirus-related conspiracy beliefs in Study 2. This may be due not only to long time intervals, but also to opposing effects that cancel each other out: Most coronavirus conspiracy beliefs contain some element that downplays the dangers of the virus, which might relieve distress. Yet, most of them also describe threatening scenarios of malevolent, secret forces, which should increase distress.

We revealed an additional way in which conspiracy beliefs may be harmful for the individual: In both studies, increases in conspiracy beliefs predicted even further increases in conspiracy beliefs at the next measurement wave. This effect emerged across both short and long time intervals and indicates that conspiracy beliefs are part of a self-reinforcing cycle that results in increasingly extreme attitudes (Goertzel, 1994; Swami et al., 2010; Wood et al., 2012).

Monday, February 7, 2022

On loving thyself: Exploring the association between self-compassion, self-reported suicidal behaviors, and implicit suicidality among college students

Zeifman, R. J., Ip, J., Antony, M. M., & Kuo, J. R. (2021).
Journal of American College Health, 69(4), 396–403.

Abstract

Objective: Suicide is a major public health concern. It is unknown whether self-compassion is associated with suicide risk above and beyond risk factors such as self-criticism, hopelessness, and depression severity. 

Participants: Participants were 130 ethnically diverse undergraduate college students. 

Methods: Participants completed self-report measures of self-compassion, self-criticism, hopelessness, depression severity, and suicidal behaviors, as well as an implicit measure of suicidality. 

Results: Self-compassion was significantly associated with self-reported suicidal behaviors, even when controlling for self-criticism, hopelessness, and depression severity. Self-compassion was not significantly associated with implicit suicidality. 

Conclusions: The findings suggest that self-compassion is uniquely associated with self-reported suicidal behaviors, but not implicit suicidality, and that self-compassion is a potentially important target in suicide risk interventions. Limitations and future research directions are discussed.
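The incremental-validity logic of the Results can be sketched as a pair of nested regressions. A minimal sketch (the variable and file names below are hypothetical placeholders, not the authors' materials):

    # Does self-compassion predict self-reported suicidal behaviors over and
    # above self-criticism, hopelessness, and depression severity?
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # hypothetical dataset, one row per participant

    base = smf.ols(
        "suicidal_behaviors ~ self_criticism + hopelessness + depression",
        data=df).fit()
    full = smf.ols(
        "suicidal_behaviors ~ self_criticism + hopelessness + depression"
        " + self_compassion",
        data=df).fit()

    # A significant self_compassion coefficient and an increase in R-squared
    # would reflect the unique association reported above.
    print(full.params["self_compassion"], full.pvalues["self_compassion"])
    print(full.rsquared - base.rsquared)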

Discussion

Clinical implications

Our findings suggest that self-criticism and self-compassion are each uniquely predictive of self-reported suicidal behaviors. Therefore, in addition to targeting self-criticism, self-compassion may be an important, and independent, target within suicide risk interventions. Indeed, a qualitative analysis of interviews with individuals with borderline personality disorder (a psychiatric disorder characterized by high levels of suicide risk) and their service providers identified self-compassion as an important theme in the process of recovery. Interventions that specifically focus on fostering self-compassion, by generating feelings of self-reassurance, warmth, and self-soothing, include compassion-focused therapy and mindful self-compassion. Compassion-based interventions have shown promise across a wide range of populations, including individuals with eating disorders, psychotic disorders, and personality disorders, as well as healthy individuals.

Sunday, February 6, 2022

Trolley Dilemma in Papua. Yali horticulturalists refuse to pull the lever

Sorokowski, P., Marczak, M., Misiak, M. et al. 
Psychon Bull Rev 27, 398–403 (2020).

Abstract

Although many studies show cultural or ecological variability in moral judgments, cross-cultural responses to the trolley problem (kill one person to save five others) indicate that certain moral principles might be prevalent in human populations. We conducted a study in a traditional, indigenous, non-Western society inhabiting the remote Yalimo valley in Papua, Indonesia. We modified the original trolley dilemma to produce an ecologically valid "falling tree dilemma." Our experiment showed that the Yali are significantly less willing than Western people to sacrifice one person to save five others in this moral dilemma. The results indicate that utilitarian responses to the trolley dilemma might be less widespread than previously supposed. Instead, they are likely to be mediated by sociocultural factors.

Discussion

Our study showed that Yali participants were significantly less willing than Western participants to sacrifice one person to save five others in the moral dilemma. More specifically, the difference was so large that the odds of pushing the tree were approximately 73% smaller for Papuan participants than for Canadian participants.
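To unpack that effect size: "73% smaller odds" corresponds to an odds ratio of roughly 0.27. A short sketch of the conversion between odds and probabilities (the 80% Canadian base rate below is a hypothetical placeholder, not the study's figure):

    # Interpret an odds ratio of ~0.27 by converting probabilities to odds and back.
    def odds(p):
        return p / (1 - p)

    def prob(o):
        return o / (1 + o)

    p_canadian = 0.80                      # hypothetical rate of "push" decisions
    yali_odds = 0.27 * odds(p_canadian)    # odds approximately 73% smaller
    print(round(prob(yali_odds), 2))       # ~0.52: a much lower implied rate

Note that a 73% reduction in odds is not a 73% reduction in probability; how far the two diverge depends on the base rate.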

Our findings reflect cultural differences between the Western and Yali participants, which are illustrated by the two most common explanations provided by Papuans immediately after the experiment. First, owing to the extremely harsh consequences of causing someone's death in Yali society, our Papuan participants did not want to expose themselves to any potential trouble and were, therefore, unwilling to take any action in the tree dilemma. Under the rules of conduct in Yali society, a person accused of contributing to someone's death is killed. Moreover, the accused individual's whole extended family, and even their village, are in danger of death (Koch, 1974), because the relatives of the deceased person are obliged to compensate for the wrongdoing by killing the same or a greater number of persons.

Another common explanation was related to religion. The Yali often argued that people should not interfere with the divine decision about someone's life and death (e.g., "I'm not God, so I can't make the decision"). Hence, although reason may suggest an action is appropriate, religion suggests otherwise, and religious believers decide in favor of the latter (Piazza & Landy, 2013). Moreover, more traditional populations may draw on religion more than secular, modern WEIRD populations do. 

Saturday, February 5, 2022

Can Brain Organoids Be ‘Conscious’? Scientists May Soon Find Out

Anil Seth
Wired.com
Originally posted 20 December 2021

Here is an excerpt:

The challenge here is that we are still not sure how to define consciousness in a fully formed human brain, let alone in a small cluster of cells grown in a lab. But there are some promising avenues to explore. One prominent candidate for a brain signature of consciousness is its response to a perturbation. If you stimulate a conscious brain with a pulse of energy, the electrical echo will reverberate in complex patterns over time and space. Do the same thing to an unconscious brain and the echo will be very simple—like throwing a stone into still water. The neuroscientist Marcello Massimini and his team at the University of Milan have used this discovery to detect residual or “covert” consciousness in behaviorally unresponsive patients with severe brain injury. What happens to brain organoids when stimulated this way remains unknown—and it is not yet clear how the results might be interpreted.
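The perturbational approach described here (Massimini's perturbational complexity index) essentially asks how incompressible the binarized spatiotemporal echo is. As a crude sketch of that compressibility intuition only, and not the published PCI pipeline, one can count the phrases in a Lempel-Ziv-style parsing of a binary sequence: a simple, repetitive echo parses into very few phrases, while a complex one parses into many.

    # Rough sketch: Lempel-Ziv-style phrase count as a complexity proxy.
    # A periodic (simple) sequence yields few phrases; an irregular one yields many.
    def lz_phrase_count(s):
        i, c = 0, 0
        while i < len(s):
            l = 1
            # extend the current phrase while it already occurs in the preceding text
            while i + l <= len(s) and s[i:i + l] in s[:i + l - 1]:
                l += 1
            c += 1
            i += l
        return c

    import random
    random.seed(0)
    periodic = "01" * 50                                    # a simple "echo"
    irregular = "".join(random.choice("01") for _ in range(100))
    print(lz_phrase_count(periodic))     # small (3): highly compressible
    print(lz_phrase_count(irregular))    # much larger: incompressible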

As brain organoids develop dynamics increasingly similar to those observed in conscious human brains, we will have to reconsider both what we take to be reliable brain signatures of consciousness in humans and what criteria we might adopt to ascribe consciousness to something made, not born.

The ethical implications of this are obvious. A conscious organoid might consciously suffer, and we may never recognize its suffering because it cannot express anything.

Friday, February 4, 2022

Latent motives guide structure learning during adaptive social choice

van Baar, J.M., Nassar, M.R., Deng, W. et al.
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-021-01207-4

Abstract

Predicting the behaviour of others is an essential part of social cognition. Despite its ubiquity, social prediction poses a poorly understood generalization problem: we cannot assume that others will repeat past behaviour in new settings or that their future actions are entirely unrelated to the past. We demonstrate that humans solve this challenge using a structure learning mechanism that uncovers other people’s latent, unobservable motives, such as greed and risk aversion. In four studies, participants (N = 501) predicted other players’ decisions across four economic games, each with different social tensions (for example, Prisoner’s Dilemma and Stag Hunt). Participants achieved accurate social prediction by learning the stable motivational structure underlying a player’s changing actions across games. This motive-based abstraction enabled participants to attend to information diagnostic of the player’s next move and disregard irrelevant contextual cues. Participants who successfully learned another’s motives were more strategic in a subsequent competitive interaction with that player in entirely new contexts, reflecting that social structure learning supports adaptive social behaviour.

Significance statement

A hallmark of human cognition is being able to predict the behavior of others. How do we achieve social prediction given that we routinely encounter others in a dizzying array of social situations? We find people achieve accurate social prediction by inferring another’s hidden motives—motives that do not necessarily have a one-to-one correspondence with observable behaviors. Participants were able to infer another’s motives using a structure learning mechanism that enabled generalization.  Individuals used what they learned about others in one setting to predict their actions in an entirely new setting. This cognitive process can explain a wealth of social behaviors, ranging from strategic economic decisions to stereotyping and racial bias.

From the Discussion

How do people construct and apply abstracted mental models of others' motives? Our data suggest that attention plays a key role in guiding this process. Attention is a fundamental cognitive mechanism because it affords optimal access to behaviorally relevant information under limited processing capacity. Our findings show how attention supports social prediction. In the Social Prediction Game, as in everyday social interactions, multiple cues could be predictive of another's behavior, from the player payoffs S and T to the order of the games or even the initials of the player. Structure learning allowed participants to disregard superficial cues and attend to information relevant to the players' latent motives. Although this process facilitated accurate social prediction with limited effort when the inferred motives were correct, incorrect structure learning directed attention counterproductively toward irrelevant information. For example, participants who did not consider risk aversion failed to shift their attention to the sucker's payoff (S) during the Pessimist block and instead kept looking at the temptation to defect (T), thereby missing out on information predictive of the player's choices. This suggests that what we can learn about other people is limited by our expectations.
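As a toy illustration of this kind of motive inference (the choice model and all numbers below are hypothetical, not the authors' computational model), consider inferring whether a player is driven by greed or by risk aversion from a single defection:

    # Toy Bayesian motive inference: a greedy player is assumed to defect when
    # the temptation payoff T is high; a risk-averse player when the sucker's
    # payoff S is badly negative. All parameters are illustrative.
    def p_defect(motive, S, T):
        if motive == "greedy":
            return 0.9 if T > 10 else 0.2
        return 0.9 if S < 0 else 0.2   # "risk_averse"

    def update(prior, choice, S, T):
        post = {}
        for motive, p in prior.items():
            like = p_defect(motive, S, T)
            post[motive] = p * (like if choice == "defect" else 1 - like)
        z = sum(post.values())
        return {m: v / z for m, v in post.items()}

    prior = {"greedy": 0.5, "risk_averse": 0.5}
    # Defection in a game with low temptation but a punishing sucker's payoff
    # is diagnostic of risk aversion, not greed.
    print(update(prior, "defect", S=-5, T=5))   # ~{'greedy': 0.18, 'risk_averse': 0.82}

Once the posterior favors risk aversion, the model, like the participants described above, should attend to S and largely ignore T in new games.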

Thursday, February 3, 2022

Neural computations in children’s third-party interventions are modulated by their parents’ moral values

Kim, M., Decety, J., Wu, L. et al.
npj Sci. Learn. 6, 38 (2021). 
https://doi.org/10.1038/s41539-021-00116-5

Abstract

One means by which humans maintain social cooperation is through intervention in third-party transgressions, a behaviour observable from the early years of development. While it has been argued that pre-school age children's intervention behaviour is driven by normative understandings, there is scepticism regarding this claim. There is also little consensus regarding the underlying mechanisms and motives that initially drive intervention behaviours in pre-school children. To elucidate the neural computations of moral norm violation associated with young children's intervention into third-party transgression, forty-seven preschoolers (average age 53.92 months) participated in a study comprising electroencephalographic (EEG) measurements, a live interaction experiment, and a parent survey about moral values. This study provides data indicating that early implicit evaluations, rather than late deliberative processes, are implicated in a child's spontaneous intervention into third-party harm. Moreover, our findings suggest that parents' values about justice influence their children's early neural responses to third-party harm and their overt costly intervention behaviour.

From the Discussion

Our study further provides evidence that children as young as 3 years of age can enact costly third-party intervention by protesting and reporting. Previous research has shown that young children from age 3 enact third-party punishment on transgressors shown in videos or puppet shows9,10. In the present study, in the context of a real-life transgression experiment, even the youngest participant (41 months old) engaged in costly intervention, hinting disapproval to the adult transgressor ("Why are you doing that?") and subsequently reporting the damage when prompted. During the experiment, confounding factors, such as a sense of 'responsibility', were avoided by keeping the person playing the 'research assistant' role out of the room when the transgression occurred. Furthermore, when leaving the room, the 'research assistant' did not assign the children any special role to police or monitor the actions of the 'visitor' (who would transgress). Moreover, the transgressor was not an acquaintance of the child, and the book was said to belong to a university (not the child's school or the researchers), giving little sense of in-group/out-group membership11,60. The participating children would also likely attribute 'power' and 'authority' to the visitor/transgressor as an adult26. Nevertheless, in this real-life experimental context, 34.8% of children explicitly protested to the adult wrongdoer.

(cut)

It should be emphasized that parents' cognitive empathy was not implicated in the child's neural computations of moral norms or their spontaneous intervention behaviour. However, parents' cognitive empathy was positively correlated with a child's effortful control and their subsequent reporting behaviour. This distinct contribution of two different dispositions (cognitive empathy and justice sensitivity) suggests that parenting strategies to enhance a child's moral development require both aspects: perspective-taking and an understanding of moral values. 

Wednesday, February 2, 2022

Psychopathy and Moral-Dilemma Judgment: An Analysis Using the Four-Factor Model of Psychopathy and the CNI Model of Moral Decision-Making

Luke, D. M., Neumann, C. S., & Gawronski, B.
(2021). Clinical Psychological Science. 
https://doi.org/10.1177/21677026211043862

Abstract

A major question in clinical and moral psychology concerns the nature of the commonly presumed association between psychopathy and moral judgment. In the current preregistered study (N = 443), we aimed to address this question by examining the relation between psychopathy and responses to moral dilemmas pitting consequences for the greater good against adherence to moral norms. To provide more nuanced insights, we measured four distinct facets of psychopathy and used the CNI model to quantify sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction over action (I) in responses to moral dilemmas. Psychopathy was associated with a weaker sensitivity to moral norms, which showed unique links to the interpersonal and affective facets of psychopathy. Psychopathy did not show reliable associations with either sensitivity to consequences or general preference for inaction over action. Implications of these findings for clinical and moral psychology are discussed.
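The CNI model is a multinomial processing tree: with probability C, a response follows the consequences; failing that, with probability N, it follows the moral norm; failing both, inaction is chosen with probability I. A minimal sketch of the implied response probabilities, assuming that tree structure (the parameter values below are illustrative, not the study's estimates):

    # CNI processing tree: probability of choosing action in a given dilemma.
    def p_action(C, N, I, norm_prescribes_action, consequences_favor_action):
        via_consequences = C * (1.0 if consequences_favor_action else 0.0)
        via_norm = (1 - C) * N * (1.0 if norm_prescribes_action else 0.0)
        via_general_preference = (1 - C) * (1 - N) * (1 - I)
        return via_consequences + via_norm + via_general_preference

    # Lowering N (as reported for the interpersonal-affective facets) raises
    # the predicted rate of norm-violating action when a proscriptive norm
    # conflicts with beneficial consequences:
    print(p_action(C=0.3, N=0.6, I=0.5,
                   norm_prescribes_action=False,
                   consequences_favor_action=True))   # 0.44
    print(p_action(C=0.3, N=0.2, I=0.5,
                   norm_prescribes_action=False,
                   consequences_favor_action=True))   # 0.58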

From the Discussion

In support of our hypotheses, general psychopathy scores and a superordinate latent variable (representing the broad syndrome of psychopathy) showed significant negative relations with sensitivity to moral norms, which suggests that people with elevated psychopathic traits were less sensitive to moral norms in their responses to moral dilemmas in comparison with other people. Further analyses at the facet level suggested that sensitivity to moral norms was uniquely associated with the interpersonal-affective facets of psychopathy. Both of these findings persisted when controlling for gender. As predicted, the antisocial facet showed a negative zero-order correlation with sensitivity to moral norms, but this association fell to nonsignificance when controlling for other facets of psychopathy and gender. At the manifest variable level, neither general psychopathy scores nor the four facets showed reliable relations with either sensitivity to consequences or general preference for inaction over action.

(cut)

More broadly, the current findings have important implications for both clinical and moral psychology. For clinical psychology, our findings speak to ongoing questions about whether people with elevated levels of psychopathy exhibit disturbances in moral judgment. In a recent review of the literature on psychopathy and moral judgment, Larsen et al. (2020) claimed there was "no consistent, well-replicated evidence of observable deficits in . . . moral judgment" (p. 305). However, a notable limitation of this review is that its analysis of moral-dilemma research focused exclusively on studies that used the traditional approach. Consistent with past research using the CNI model (e.g., Gawronski et al., 2017; Körner et al., 2020; Luke & Gawronski, 2021a) and in contrast to Larsen et al.'s conclusion, the current findings indicate substantial deviations in moral-dilemma judgments among people with elevated psychopathic traits, particularly in their conformity to moral norms.