Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 23, 2021

Decision fatigue: Why it’s so hard to make up your mind these days, and how to make it easier

Stacy Colino
The Washington Post
Originally posted 22 Sept 21

Here is an excerpt:

Decision fatigue is more than just a feeling; it stems in part from changes in brain function. Research using functional magnetic resonance imaging has shown that there’s a sweet spot for brain function when it comes to making choices: When people were asked to choose from sets of six, 12 or 24 items, activity was highest in the striatum and the anterior cingulate cortex — both of which coordinate various aspects of cognition, including decision-making and impulse control — when the people faced 12 choices, which was perceived as “the right amount.”
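
The pattern described here, with engagement peaking at a moderate number of options, is an inverted U. As a rough illustration, here is a minimal Python sketch of a toy cost-benefit model; the functional form and parameters are invented so the peak lands near 12, and are not estimates from the fMRI study.

```python
# Toy inverted-U model of choice-set size: options add value with
# diminishing returns, while comparing them carries a linear cognitive
# cost. Parameters are illustrative, not fitted to the study's data.
import math

def net_engagement(n_options: int, benefit: float = 12.0, cost: float = 1.0) -> float:
    """Log benefit of variety minus linear comparison cost."""
    return benefit * math.log(n_options) - cost * n_options

for n in (6, 12, 24):
    print(f"{n:2d} options -> net engagement {net_engagement(n):6.2f}")
# Peaks at 12 in this toy parameterization, echoing the excerpt.
```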

Decision fatigue may make it harder to exercise self-control when it comes to eating, drinking, exercising or shopping. “Depleted people become more passive, which becomes bad for their decision-making,” says Roy Baumeister, a professor of psychology at the University of Queensland in Australia and author of  “Willpower: Rediscovering the Greatest Human Strength.” “They can be more impulsive. They may feel emotions more strongly. And they’re more susceptible to bias and more likely to postpone decision-making.”

In laboratory studies, researchers asked people to choose from an array of consumer goods or college course options or to simply think about the same options without making choices. They found that the choice-makers later experienced reduced self-control, including less physical stamina, greater procrastination and lower performance on tasks involving math calculations; the choice-contemplators didn’t experience these depletions.

Having insufficient information about the choices at hand may influence people’s susceptibility to decision fatigue. Experiencing high levels of stress and general fatigue can, too, Bufka says. And if you believe that the choices you make say something about who you are as a person, that can ratchet up the pressure, increasing your chances of being vulnerable to decision fatigue.

The suggestions include:

1. Sleep well
2. Make some choices automatic
3. Enlist a choice advisor
4. Give expectations a reality check
5. Pace yourself
6. Pay attention to feelings

Friday, October 22, 2021

A Meta-Analytic Investigation of the Antecedents, Theoretical Correlates, and Consequences of Moral Disengagement at Work

Ogunfowora, B. T., et al. (2021)
The Journal of Applied Psychology
Advance online publication.
https://doi.org/10.1037/apl0000912

Abstract

Moral disengagement refers to a set of cognitive tactics people employ to sidestep moral self-regulatory processes that normally prevent wrongdoing. In this study, we present a comprehensive meta-analytic review of the nomological network of moral disengagement at work. First, we test its dispositional and contextual antecedents, theoretical correlates, and consequences, including ethics (workplace misconduct and organizational citizenship behaviors [OCBs]) and non-ethics outcomes (turnover intentions and task performance). Second, we examine Bandura's postulation that moral disengagement fosters misconduct by diminishing moral cognitions (moral awareness and moral judgment) and anticipatory moral self-condemning emotions (guilt). We also test a contrarian view that moral disengagement is limited in its capacity to effectively curtail moral emotions after wrongdoing. The results show that Honesty-Humility, guilt proneness, moral identity, trait empathy, conscientiousness, idealism, and relativism are key individual antecedents. Further, abusive supervision and perceived organizational politics are strong contextual enablers of moral disengagement, while ethical leadership and organizational justice are relatively weak deterrents. We also found that narcissism, Machiavellianism, psychopathy, and psychological entitlement are key theoretical correlates, although moral disengagement shows incremental validity over these "dark" traits. Next, moral disengagement was positively associated with workplace misconduct and turnover intentions, and negatively related to OCBs and task performance. Its positive impact on misconduct was mediated by lower moral awareness, moral judgment, and anticipated guilt. Interestingly, however, moral disengagement was positively related to guilt and shame post-misconduct. In sum, we find strong cumulative evidence for the pertinence of moral disengagement in the workplace.
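
The mediation claim in the abstract (moral disengagement raises misconduct by dampening moral awareness, moral judgment, and anticipated guilt) follows the standard product-of-paths logic. Below is a minimal simulated sketch in Python; the variable names and coefficients are invented for illustration and are not the meta-analytic estimates.

```python
# Simulated mediation: disengagement -> lower anticipated guilt -> misconduct.
# All effect sizes below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
disengagement = rng.normal(size=n)
guilt = -0.5 * disengagement + rng.normal(size=n)                     # path a (negative)
misconduct = 0.3 * disengagement - 0.4 * guilt + rng.normal(size=n)   # paths c' and b

def ols(y, *predictors):
    """Least-squares slopes with an intercept; returns coefficients after the intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(a,) = ols(guilt, disengagement)
b, c_prime = ols(misconduct, guilt, disengagement)
print(f"a = {a:.2f}, b = {b:.2f}, indirect a*b = {a * b:.2f}, direct c' = {c_prime:.2f}")
# a and b are both negative, so the indirect effect a*b is positive:
# disengagement increases misconduct partly through reduced guilt.
```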

From the Discussion

Our moderator analyses reveal several noteworthy findings. First, the relationship between moral disengagement and misconduct did not significantly differ depending on whether it is operationalized as a trait or state. This suggests that the impact of moral disengagement – at least with respect to workplace misconduct – is equally devastating when it is triggered in specific situations or when it is captured as a stable propensity. This provides initial support for conceptualizing moral disengagement along a continuum – from “one off” instances in specific contexts (i.e., state moral disengagement) to a “dynamic disposition” (Bandura, 1999b) that is relatively stable, but which may also shift in response to different situations (Moore et al., 2019).  

Second, there may be utility in exploring specific disengagement tactics. For instance, euphemistic labeling exerted stronger effects on misconduct compared to moral justification and diffusion of responsibility. Relative weight analyses further showed that some tactics contribute more to understanding misconduct and OCBs. Scholars have proposed that exploring moral disengagement tactics that match the specific context may offer new insights (Kish-Gephart et al., 2014; Moore et al., 2019). It is possible that moral justification might be critical in situations where participants must conjure up rationales to justify their misdeeds (Duffy et al., 2005), while diffusion of responsibility might matter more in team settings where morally disengaging employees can easily assign blame to the collective (Alnuaimi et al., 2010). These possibilities suggest that specific disengagement tactics may offer novel theoretical insights that may be overlooked when scholars focus on overall moral disengagement. However, we acknowledge that this conclusion is preliminary given the small number of studies available for these analyses. 

Thursday, October 21, 2021

How Disgust Affects Social Judgments

Inbar, Y., & Pizarro, D.
(2021, September 7). 

Abstract

The emotion of disgust has been claimed to affect a diverse array of social judgments, including moral condemnation, inter-group prejudice, political ideology, and much more. We attempt to make sense of this large and varied literature by reviewing the theory and research on how and why disgust influences these judgments. We first describe two very different perspectives adopted by researchers on why disgust should affect social judgment. The first is the pathogen-avoidance account, which sees the relationship between disgust and judgment as resulting from disgust’s evolved function as a pathogen-avoidance mechanism. The second is the extended disgust account, which posits that disgust functions much more broadly to address a range of other threats and challenges. We then review the empirical evidence to assess how well it supports each of these perspectives, arguing that there is more support for the pathogen-avoidance account than the extended account. We conclude with some testable empirical predictions that can better distinguish between these two perspectives.

Conclusion

We have described two very different perspectives on disgust that posit very different explanations for its role in social judgments. In our view, the evidence currently supports the pathogen-avoidance account over the extended-disgust alternative, but the question is best settled by future research explicitly designed to differentiate the two perspectives.

Wednesday, October 20, 2021

The Fight to Define When AI Is ‘High Risk’

Khari Johnson
wired.com
Originally posted 1 Sept 21

Here is an excerpt:

At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.
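
As a rough sketch of the screening logic the bill's definition implies, here is a hypothetical Python function; the field names, the rights list, and the examples are assumptions for illustration, not the Act's legal schema.

```python
# Hypothetical screen for the "high risk" definition quoted above: a system
# is high risk if it can harm health or safety, or can infringe a
# fundamental right. Field names and categories are illustrative only.
from dataclasses import dataclass, field

FUNDAMENTAL_RIGHTS = {"life", "non-discrimination", "fair trial"}  # illustrative subset

@dataclass
class AISystem:
    name: str
    can_harm_health_or_safety: bool = False
    rights_at_risk: set = field(default_factory=set)

def is_high_risk(system: AISystem) -> bool:
    """Flag systems touching safety or any listed fundamental right."""
    return system.can_harm_health_or_safety or bool(system.rights_at_risk & FUNDAMENTAL_RIGHTS)

print(is_high_risk(AISystem("CV screener", rights_at_risk={"non-discrimination"})))  # True
print(is_high_risk(AISystem("spam filter")))                                         # False
```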

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU's General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Tuesday, October 19, 2021

Why Empathy Is Not a Reliable Source of Information in Moral Decision Making

Decety, J. (2021).
Current Directions in Psychological Science. 
https://doi.org/10.1177/09637214211031943

Abstract

Although empathy drives prosocial behaviors, it is not always a reliable source of information in moral decision making. In this essay, I integrate evolutionary theory, behavioral economics, psychology, and social neuroscience to demonstrate why and how empathy is unconsciously and rapidly modulated by various social signals and situational factors. This theoretical framework explains why decision making that relies solely on empathy is not ideal and can, at times, erode ethical values. This perspective has social and societal implications and can be used to reduce cognitive biases and guide moral decisions.

From the Conclusion

Empathy can encourage overvaluing some people and ignoring others, and privileging one over many. Reasoning is therefore essential to filter and evaluate emotional responses that guide moral decisions. Understanding the ultimate causes and proximate mechanisms of empathy allows characterization of the kinds of signals that are prioritized and identification of situational factors that exacerbate empathic failure. Together, this knowledge is useful at a theoretical level, and additionally provides practical information about how to reframe situations to activate alternative evolved systems in ways that promote normative moral conduct compatible with current societal aspirations. This conceptual framework advances current understanding of the role of empathy in moral decision making and may inform efforts to correct personal biases. Becoming aware of one’s biases is not the most effective way to manage and mitigate them, but empathy is not something that can be ignored. It has an adaptive biological function, after all.

Monday, October 18, 2021

Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots

M. Asada
Philosophies 2019, 4(3), 38
https://doi.org/10.3390/philosophies4030038

Abstract

In this paper, a working hypothesis is proposed: a nervous system for pain sensation is a key component for shaping the conscious minds of robots (artificial systems). This hypothesis is then argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others) scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in a broad sense is shown, followed by the limitation of the current progress of artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain, and later, in the sharing stage of the pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts—physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing studies of CDR are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed.

Discussion

To tackle the issue of consciousness, this study attempted to represent it as a phenomenon of the developmental process of artificial empathy for pain and moral behavior generation. A conceptual model is given for the former, while the latter remains, for now, a story of fantasy. If a robot is regarded as a moral being that is capable of exhibiting moral behavior with others, does it deserve to receive moral behavior from them? If so, can we agree that such robots have conscious minds? This is an issue of ethics towards robots, and is also related to the legal system. Can we ask such robots to accept a sort of responsibility for any accident they cause? If so, how? These issues arise when we introduce robots qualified as moral beings with conscious minds into our society.

Before these issues can be considered, however, many technical issues must be addressed. Among them, the following should be tackled intensively; a minimal sketch follows the list.
  1. Associate the sensory discrimination of pain with the affective and motivational responses to pain (the construction of the pain matrix and memory dynamics).
  2. Recall the experience when a painful situation of others is observed.
  3. Generate appropriate behavior to reduce the pain.
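
As referenced above, here is a minimal sketch of how these three steps could fit together; the class, thresholds, and behaviors are invented for illustration and are not the paper's architecture.

```python
# Toy agent covering the three issues listed above:
# (1) couple a pain signal with an affective response and store it,
# (2) recall that experience when another agent's pain is observed,
# (3) generate behavior intended to reduce the pain.
# Names and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PainAwareRobot:
    pain_memory: list = field(default_factory=list)

    def feel_pain(self, intensity: float) -> str:
        self.pain_memory.append(intensity)               # issue 1: bind and remember
        return "withdraw" if intensity > 0.5 else "monitor"

    def observe_other(self, observed: float) -> str:
        felt_similar = any(abs(p - observed) < 0.2 for p in self.pain_memory)  # issue 2
        return "comfort/assist" if felt_similar else "no empathic response"    # issue 3

robot = PainAwareRobot()
print(robot.feel_pain(0.8))      # withdraw
print(robot.observe_other(0.7))  # comfort/assist (recalls similar pain)
print(robot.observe_other(0.1))  # no empathic response
```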

Sunday, October 17, 2021

The Cognitive Science of Technology

D. Stout
Trends in Cognitive Sciences
Available online 4 August 2021

Abstract

Technology is central to human life but hard to define and study. This review synthesizes advances in fields from anthropology to evolutionary biology and neuroscience to propose an interdisciplinary cognitive science of technology. The foundation of this effort is an evolutionarily motivated definition of technology that highlights three key features: material production, social collaboration, and cultural reproduction. This broad scope respects the complexity of the subject but poses a challenge for theoretical unification. Addressing this challenge requires a comparative approach to reduce the diversity of real-world technological cognition to a smaller number of recurring processes and relationships. To this end, a synthetic perceptual-motor hypothesis (PMH) for the evolutionary–developmental–cultural construction of technological cognition is advanced as an initial target for investigation.

Highlights
  • Evolutionary theory and paleoanthropological/archaeological evidence motivate a theoretical definition of technology as socially reproduced and elaborated behavior involving the manipulation and modification of objects to enact changes in the physical environment.
  • This definition helps to resolve or obviate ongoing controversies in the anthropological, neuroscientific, and psychological literature relevant to technology.
  • A review of evidence from across these disciplines reveals that real-world technologies are diverse in detail but unified by the underlying demands and dynamics of material production. This creates opportunities for meaningful synthesis using a comparative method.
  • A ‘perceptual‐motor hypothesis’ proposes that technological cognition is constructed on biocultural evolutionary and developmental time scales from ancient primate systems for sensorimotor prediction and control.
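
The sensorimotor prediction at the heart of the last highlight is commonly modeled as a forward model: predict the sensory consequences of a motor command, then learn from the prediction error. Here is a generic Python sketch under that assumption (a delta-rule learner, not the paper's model).

```python
# Generic forward-model sketch: learn the mapping from motor command to
# sensory outcome by reducing prediction error. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
true_gain = 2.0        # unknown command-to-sensation mapping
estimated_gain = 0.0   # the forward model's current guess
learning_rate = 0.1

for _ in range(200):
    command = rng.uniform(-1.0, 1.0)
    predicted = estimated_gain * command                # predicted sensory outcome
    sensed = true_gain * command + rng.normal(scale=0.05)
    error = sensed - predicted                          # prediction error
    estimated_gain += learning_rate * error * command   # delta-rule update

print(f"learned gain: {estimated_gain:.2f} (true value 2.0)")
```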

Saturday, October 16, 2021

Social identity shapes antecedents and functional outcomes of moral emotion expression in online networks

Brady, W. J., & Van Bavel, J. J. 
(2021, April 2). 

Abstract

As social interactions increasingly occur through social media platforms, intergroup affective phenomena such as “outrage firestorms” and “cancel culture” have emerged with notable consequences for society. In this research, we examine how social identity shapes the antecedents and functional outcomes of moral emotion expression online. Across four pre-registered experiments (N = 1,712), we find robust evidence that the inclusion of moral-emotional expressions in political messages has a causal influence on intentions to share the messages on social media. We find that individual differences in the strength of partisan identification are a consistent predictor of sharing messages with moral-emotional expressions, but little evidence that brief manipulations of identity salience increased sharing. Negative moral emotion expression in social media messages also causes the message author to be perceived as more strongly identified among their partisan ingroup, but less open-minded and less worthy of conversation to outgroup members. These experiments highlight the role of social identity in affective phenomena in the digital age, and showcase how moral emotion expressions in online networks can serve ingroup reputation functions while at the same time hindering discourse between political groups.

Conclusion

In the context of contentious political conversations online, moral-emotional language causes political partisans to share the message more often, and this effect is strongest among strong group identifiers. Expressing negative moral-emotional language in social media messages makes the message author appear more strongly identified with their group, but also makes outgroup members think the author is less open-minded and less worthy of conversation. This work sheds light on the antecedents and functional outcomes of moral-emotion expression in the digital age, which is becoming increasingly important to study as intergroup affective phenomena such as viral outrage and affective polarization are reaching historic levels.
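
Work in this area commonly operationalizes "moral-emotional expression" by counting dictionary words in each message. Here is a toy Python sketch in that spirit; the word list, messages, and ratings are invented, not the authors' stimuli or model.

```python
# Toy dictionary count of moral-emotional words per message, paired with
# a made-up sharing-intention rating. Everything here is illustrative.
MORAL_EMOTIONAL = {"outrage", "shameful", "evil", "disgraceful", "betrayal"}

def moral_emotional_count(message: str) -> int:
    return sum(w.strip(".,!?").lower() in MORAL_EMOTIONAL for w in message.split())

messages = [  # (text, hypothetical sharing intention on a 0-1 scale)
    ("Their shameful vote is an outrage and a betrayal!", 0.9),
    ("The committee released its voting schedule today.", 0.2),
]
for text, intention in messages:
    print(f"count={moral_emotional_count(text)}  sharing_intention={intention}  {text!r}")
# In the experiments, higher moral-emotional content raised sharing intentions.
```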

Friday, October 15, 2021

The Ethics of Sex Robots

Sterri, A. B., & Earp, B. D. (in press).
In C. Véliz (ed.), The Oxford Handbook of 
Digital Ethics. Oxford: Oxford University Press.

Abstract 

What, if anything, is wrong with having sex with a robot? For the sake of this chapter, the authors will assume that sexbots are ‘mere’ machines that are reliably identifiable as such, despite their human-like appearance and behaviour. Under these stipulations, sexbots themselves can no more be harmed, morally speaking, than your dishwasher. However, there may still be something wrong about the production, distribution, and use of such sexbots. In this chapter, the authors examine whether sex with robots is intrinsically or instrumentally wrong and critically assess different regulatory responses. They defend a harm reduction approach to sexbot regulation, analogous to the approach that has been considered in other areas, concerning, for example, drugs and sex work.

Conclusion  

Even if sexbots never become sentient, we have good reasons to be concerned with their production, distribution, and use. Our seemingly private activities have social meanings that we do not necessarily intend, but which can be harmful to others. Sex can be both beautiful and valuable, and ugly or profoundly harmful. We therefore need strong ethical norms to guide human sexual behaviour, regardless of the existence of sexbots. Interaction with new technologies could plausibly improve our sexual relationships, or make things worse (see Nyholm et al. forthcoming, for a theoretical overview). In this chapter, we have explored some ways in which a harm reduction framework may have the potential to bring about the alleged benefits of sexbots with a minimum of associated harms. But whatever approach is taken, the goal should be to ensure that our relationships with robots conduce to, rather than detract from, the equitable flourishing of our fellow human beings.