Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, January 4, 2023

How social identity tunes moral cognition

Van Bavel, J. J., Packer, D., et al. (2022, November 18).
PsyArXiv.
https://doi.org/10.31234/osf.io/9efsb

Abstract

In this chapter, we move beyond the treatment of intuition and reason as competing systems and outline how social contexts, and especially social identities, allow people to flexibly “tune” their cognitive reactions to moral contexts—a process we refer to as “moral tuning”. Collective identities—identities based on shared group memberships—significantly influence judgments and decisions of many kinds, including in the moral domain. We explain why social identities influence all aspects of moral cognition, including processes traditionally classified as intuition and reasoning. We then explain how social identities tune preferences and goals, expectations, and which outcomes people care about. Finally, we propose directions for future research in moral psychology.

Social Identities Tune Preferences and Goals

Morally relevant situations often involve choices in which the interests of different parties are in tension. Moral transgressions typically involve an agent putting their own desires ahead of the interests, needs, or rights of others, thus causing them harm (e.g., Gray et al., 2012), whereas acts worthy of moral praise usually involve an agent sacrificing self-interest for the sake of someone else or the greater good. Value-computation frameworks of cooperation model how much people weigh the interests of different parties (e.g., their own versus others’) in terms of social preferences (see Van Bavel et al., 2022). Social preference parameters can, for example, capture individual differences in how much people prioritize their own outcomes over others’ (e.g., pro-selfs versus pro-socials as indexed by social value orientation; Balliet et al., 2009). These preferences, along with social norms, inform the computations that underlie decisions to engage in selfish or pro-social behavior (Hackel, Wills, & Van Bavel, 2020).
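
To make the value-computation idea concrete, here is a minimal sketch in Python. Everything in it (the weighted-sum form, the parameter values, the payoffs) is an illustrative assumption, not the specific model in Van Bavel et al. (2022):

```python
# Illustrative value-computation sketch: utility is a weighted sum of own
# and other payoffs; the weight on the other's payoff plays the role of a
# social preference parameter (low for "pro-selfs", high for "pro-socials").
# All names and numbers are hypothetical.

def utility(own_payoff: float, other_payoff: float, w_other: float) -> float:
    """Subjective value of an allocation for the decision-maker."""
    return own_payoff + w_other * other_payoff

# Choice: keep 6 points for yourself, or split 5/5 with a partner.
options = {"keep": (6, 0), "split": (5, 5)}

for label, w in [("pro-self", 0.1), ("pro-social", 0.9)]:
    values = {name: utility(own, other, w)
              for name, (own, other) in options.items()}
    choice = max(values, key=values.get)
    print(f"{label} (w_other={w}): {values} -> chooses '{choice}'")
```

With these made-up numbers, the pro-self keeps the larger personal payoff while the pro-social prefers the split; norm terms or additional parties could be added to the sum in the same way.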

We argue that social identity also influences social preferences, such that people tend to care more about outcomes incurred by in-group than out-group members (Tajfel & Turner, 1979; Van Bavel & Packer, 2021). For instance, highly identified group members appear to experience vicarious reward when they observe in-group (but not out-group) members experiencing positive outcomes, as indexed by activity in the ventral striatum, a brain region implicated in hedonic reward (Hackel et al., 2017). Intergroup competition may exacerbate differences in concern for in-group versus out-group targets, causing people to feel empathy when in-group targets experience negative outcomes, but schadenfreude (pleasure in others’ pain) when out-group members experience these same events (Cikara et al., 2014). Shared social identities can also lead people to put collective interests ahead of their own individual interests in social dilemmas. For instance, making collective identities salient caused selfish individuals to contribute more to their group than these same people did when reminded of their individual selves (De Cremer & Van Vugt, 1999). This shift in behavior was not necessarily because they became less selfish, but rather because their sense of self had shifted from the individual to the collective level.
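
Read through the value-computation lens, identity-based “tuning” amounts to the weight itself depending on group membership. Extending the hypothetical sketch above (again, illustrative assumptions rather than the authors’ model):

```python
# Hypothetical extension: the social preference weight is "tuned" by
# whether the other party shares the decision-maker's group identity.

def tuned_weight(shared_identity: bool,
                 w_ingroup: float = 0.8, w_outgroup: float = 0.1) -> float:
    """Weight placed on the other's payoff, higher for in-group members."""
    return w_ingroup if shared_identity else w_outgroup

def utility(own: float, other: float, w: float) -> float:
    return own + w * other

for shared in (True, False):
    w = tuned_weight(shared)
    keep, split = utility(6, 0, w), utility(5, 5, w)
    print(f"shared identity={shared}: keep={keep:.1f}, split={split:.1f} -> "
          f"{'split' if split > keep else 'keep'}")
```

On this reading, making a collective identity salient raises the weight on fellow group members’ outcomes, shifting behavior toward cooperation without changing any underlying “selfishness” parameter, which matches the De Cremer and Van Vugt (1999) interpretation above.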

(cut)

Conclusion

For centuries, philosophers and scientists have debated the role of emotional intuition and reason in moral judgment. Thanks to theoretical and methodological developments over the past few decades, we believe it is time to move beyond these debates. We argue that social identity can tune the intuitions and reasoning processes that underlie moral cognition (Van Bavel et al., 2015). Extensive research has found that social identities have a significant influence on social and moral judgment and decision-making (Oakes et al., 1994; Van Bavel & Packer, 2021). This approach offers an important complement to other theories of moral psychology and suggests a powerful way to shift moral judgments and decisions—by changing identities and norms, rather than hearts and minds.

Tuesday, January 3, 2023

Varieties of White working-class identity

Knowles, E., McDermott, M., & Richeson, J.
(2021, July 2). PsyArXiv.
https://doi.org/10.31234/osf.io/mjhdy

Abstract

The present work demonstrates that, contrary to popular political narratives, working-class White Americans are far from monolithic in their class identities, social attitudes, and political preferences. Latent profile analysis (LPA) is used to distinguish three types of identity in a nationally representative sample of working-class Whites: Working Class Patriots, who valorize responsibility, embrace national identity, and disparage the poor; Class Conflict Aware, who regard social class as a structural phenomenon and ascribe elitist attitudes to higher classes; and Working Class Connected, who embrace working-class identity, sympathize with the poor, and feel disrespected because of the work they do. This identity typology appears unique to working-class Whites and is associated with distinct patterns of attitudes regarding immigration, race, and politics, such that Class Conflict Aware and Working Class Connected Whites are considerably more progressive than are Working Class Patriots. Implications for electoral politics and race relations are discussed.

Discussion

Despite often being characterized as a monolithic social and political force, members of the White working class display considerable diversity in their intergroup attitudes and voting behavior (Smith & Hanley, 2018; Teixeira & Rogers, 2000; Tyson & Maniam, 2016). In an ethnographic study of working-class Whites in Kentucky, Missouri, and Indiana, McDermott and colleagues (2019) identified three identity types among White working-class interviewees: Working Class Patriots, who identify strongly as American, emphasize responsibility, disparage the poor, and report feeling respected in their jobs; Class Conflict Aware Whites, who see the working class as locked in a conflictual relationship with socioeconomic elites; and Working Class Connected Whites, who identify strongly as members of the working class, feel compassion toward the poor, and report feeling looked down on because of the work they do. These researchers found that the three identity types were associated with different patterns of social attitudes—with Patriots tending to disparage Black people and Latino immigrants, Conflict Aware Whites displaying progressive attitudes toward these groups, and Class Connected Whites exhibiting a combination of tolerant attitudes toward immigrants and hostile attitudes toward Black people.

The present research represents a quantitative extension of these qualitative findings. In a nationally representative sample of working-class (non–college-educated) White Americans, we measured five themes emerging from previous qualitative work: American identification, the value placed on responsibility, psychological distance from the poor, the belief in stark divisions between social classes, and the tendency to feel looked down on by members of higher classes. Latent profile analysis (LPA) was then used to assess whether the White American population contains discrete types resembling the Working Class Patriot, Class Conflict Aware, and Working Class Connected groups. Indeed, the best LPA solution yielded three identity types based on our five indicators, and these types could be readily matched to those found in McDermott et al.’s (2019) qualitative work (Figure 1a). The representation of the types in our survey sample broadly matched the breakdown in the ethnographic study—with Patriots making up the majority of respondents and the remaining sample split roughly between Class Conflict Aware and Working Class Connected Whites.
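
For readers unfamiliar with the method: with continuous indicators, latent profile analysis is essentially a Gaussian finite mixture model, and the number of profiles is typically chosen by information criteria such as the BIC. A rough sketch of that workflow on synthetic data (scikit-learn as a stand-in for dedicated LPA software; the indicator names simply echo the five themes above, not the study's variables):

```python
# LPA-style workflow sketched with a Gaussian mixture model on synthetic
# data. This is a stand-in for dedicated LPA tooling, not the authors' code.
import numpy as np
from sklearn.mixture import GaussianMixture

indicators = ["american_identification", "responsibility_value",
              "distance_from_poor", "class_division_belief",
              "feels_looked_down_on"]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(indicators)))  # fake standardized scores

# Fit 1- to 6-profile solutions; keep the one with the lowest BIC.
fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 7)]
best = min(fits, key=lambda m: m.bic(X))
print("profiles favored by BIC:", best.n_components)  # ~1 for pure noise

# Assign respondents to their most likely profile and inspect sizes.
profiles = best.predict(X)
print("profile sizes:", np.bincount(profiles))
```

On real survey data with genuine subgroups, the BIC comparison is what would surface a multi-profile solution like the three-type structure reported here.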


Psychologists need to understand that White working-class culture, like any other culture, is not monolithic.

Monday, January 2, 2023

The hidden dark side of empowering leadership

Dennerlein, T., & Kirkman, B. L. (2022).
Journal of Applied Psychology, 107(12), 2220–2242. https://doi.org/10.1037/apl0001013

Abstract

The majority of theory and research on empowering leadership to date has focused on how empowering leader behaviors influence employees, portraying those behaviors as almost exclusively beneficial. We depart from this predominant consensus to focus on the potential detriments of empowering leadership for employees. Drawing from the social cognitive theory of morality, we propose that empowering leadership can unintentionally increase employees' unethical pro-organizational behavior (UPB), and that it does so by increasing their levels of moral disengagement. Specifically, we propose that hindrance stressors create a reversing effect, such that empowering leadership increases (vs. decreases) moral disengagement when hindrance stressors are higher (vs. lower). Ultimately, we argue for a positive or negative indirect effect of empowering leadership on UPB through moral disengagement. We find support for our predictions in both a time-lagged field study (Study 1) and a scenario-based experiment using an anagram cheating task (Study 2). We thus highlight the impact that empowering leadership can have on unethical behavior, providing answers to both why and when the dark side of empowering leadership behavior occurs.
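
The “reversing effect” described here is, in statistical terms, a conditional indirect effect in first-stage moderated mediation: the path from empowering leadership to moral disengagement depends on hindrance stressors. A tiny sketch with made-up coefficients (not the paper’s estimates) shows how the sign of the indirect effect flips:

```python
# Conditional indirect effect in first-stage moderated mediation, with
# hypothetical coefficients (not the paper's estimates).
#   moral_disengagement = a1*EL + a3*(EL * hindrance) + ...
#   UPB                 = b*moral_disengagement + ...
a1, a3, b = -0.10, 0.30, 0.25

def indirect_effect(hindrance: float) -> float:
    """Effect of empowering leadership (EL) on UPB via moral disengagement
    at a given (standardized) level of hindrance stressors."""
    return (a1 + a3 * hindrance) * b

for h in (-1.0, 0.0, 1.0):  # low, average, high hindrance stressors
    print(f"hindrance={h:+.1f}: indirect effect={indirect_effect(h):+.3f}")
```

With these illustrative values, the indirect effect is negative at low hindrance and positive at high hindrance, which is the pattern of a “hidden dark side” that only appears under obstructive conditions.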

Managerial Implications

Leaders should be more aware of contextual features in the workplace before using empowering leadership. If employees are likely to experience hindrance stressors when empowered, leaders will need to either (a) use less empowering leadership or (b) reduce the effects of hindrance stressors. Regarding the latter, leaders can become sponsors who remove obstacles impeding goal achievement. If bureaucracy is preventing empowered employees from reaching their goals, leaders can reduce red tape to allow more freedom. Leaders can also engage with other leaders to organize a concerted effort to remove hindrance stressors. As noted, Conger and Kanungo’s (1988) theoretical model includes removing factors that lead to feelings of powerlessness—many of which pertain to hindrance stressors, such as a lack of role clarity, a bureaucratic climate, or high levels of formalization—as a first step in the empowerment process distinct from the behaviors leaders use to empower employees. Yet applications of empowering leadership often overlook this critical element (Argyris, 1998), which, based on our findings, is problematic. If hindrance stressors cannot be removed, leaders could help employees develop better coping strategies for the frustration they are likely to experience when their goal achievement is thwarted. Coping strategies could include employee support groups, leadership development, or stress management techniques, such as mindfulness (Sutcliffe et al., 2016).


The full citation is lengthy, but here it is:

Dennerlein, T., & Kirkman, B. L. (2022). The hidden dark side of empowering leadership: The moderating role of hindrance stressors in explaining when empowering employees can promote moral disengagement and unethical pro-organizational behavior. Journal of Applied Psychology, 107(12), 2220–2242. https://doi.org/10.1037/apl0001013

Sunday, January 1, 2023

The Central Role of Lifelong Learning & Humility in Clinical Psychology

Washburn, J. J., Teachman, B. A., et al. 
(2022). Clinical Psychological Science, 0(0).
https://doi.org/10.1177/21677026221101063

Abstract

Lifelong learning plays a central role in the lives of clinical psychologists. As psychological science advances and evidence-based practices develop, it is critical for clinical psychologists not only to maintain their competencies but also to evolve them. In this article, we discuss lifelong learning as a clinical, ethical, and scientific imperative in the myriad dimensions of the clinical psychologist’s professional life, arguing that experience alone is not sufficient. Attitude is also important in lifelong learning, and we call for clinical psychologists to adopt an intellectually humble stance and embrace “a beginner’s mind” when approaching new knowledge and skills. We further argue that clinical psychologists must maintain and refresh their critical-thinking skills and seek to minimize their biases, especially as they approach the challenges and opportunities of lifelong learning. We intend for this article to encourage psychologists to think differently about how they approach lifelong learning.

Here is an excerpt:

Schwartz (2008) was specifically referencing the importance of teaching graduate students to embrace what they do not know, viewing it as an opportunity instead of a threat. The same is true, perhaps even more so, for psychologists engaging in lifelong learning.

As psychologists progress in their careers, they are told repeatedly that they are experts in their field and sometimes THE expert in their own tiny subfield. Psychologists spend their days teaching others what they know and advising students how to make their own discoveries. But expertise is a double-edged sword. Of course, it serves psychologists well in that they are less likely to repeat past mistakes, but it is a disadvantage if they become too comfortable in their expert role. The Egyptian mathematician Ptolemy devised a system based on the notion that the sun revolved around the earth, and it guided astronomers for centuries until Copernicus proved him wrong. Although Newton devised the laws of physics, Einstein showed that the principles of Newtonian physics were wholly bound by context and only “right” within certain constraints. Science is inherently self-correcting, and the only thing that one can count on is that most of what people believe today will be shown to be wrong in the not-too-distant future. One of the authors (S. D. Hollon) recalls that the two things he knew for sure coming out of graduate school were that neural tissues do not regenerate and that you cannot inherit acquired characteristics. It turns out that both are wrong. Lifelong learning and the science it is based on require psychologists to continuously challenge their expertise. Before becoming experts, psychologists often experience the impostor phenomenon during education and training (Rokach & Boulazreg, 2020). Embracing the self-doubt that comes with feeling like an impostor can motivate lifelong learning, even in areas where one feels like an expert. This means not only constantly learning about new topics but also recognizing that, as psychologists tackle tough problems and their associated research questions, complex and often interdisciplinary approaches are required to develop meaningful answers. It is neither feasible nor desirable to become an expert in all domains. This means that psychologists need to routinely surround themselves with people who make them question or expand their expertise.

Here is the conclusion:

Lifelong learning should, like doctoral programs in clinical psychology, concentrate much more on thinking than on training. Lifelong learning must encourage critical and independent thinking in the process of mastering relevant bodies of knowledge and developing specific skills. Specifically, lifelong learning must reinforce the need for clinical psychologists to reflect carefully and critically on what they read, hear, and say and to think abstractly. Such abstract thinking is as relevant after one’s graduate career as before.

Saturday, December 31, 2022

AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making

Cossette-Lefebvre, H., & Maclure, J. (2022).
AI and Ethics.
https://doi.org/10.1007/s43681-022-00233-w

Abstract

The use of predictive machine learning algorithms is increasingly common to guide or even take decisions in both public and private settings. Their use is touted by some as a potentially useful method to avoid discriminatory decisions since they are, allegedly, neutral, objective, and can be evaluated in ways no human decision can. Outsourcing a decision process (fully or partly) to an algorithm should, in principle, allow human organizations to clearly define the parameters of the decision and to remove human biases. Yet, in practice, the use of algorithms can still be the source of wrongful discriminatory decisions based on at least three of their features: the data-mining process and the categorizations they rely on can reproduce human biases, their automaticity and predictive design can lead them to rely on wrongful generalizations, and their opaque nature is at odds with democratic requirements. We highlight that the latter two aspects of algorithms and their significance for discrimination are too often overlooked in the contemporary literature. Though these problems are not all insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines to properly constrain the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and the decision reached using an algorithm should always be explainable and justifiable.
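
The first guideline, vetting algorithms for group-level harm, has a familiar quantitative starting point. As one illustrative screening check (a common heuristic from US employment-selection practice, not something the authors prescribe), here is the “four-fifths rule” disparate impact ratio:

```python
# Disparate impact screening via the "four-fifths rule": flag the model if
# the lowest group selection rate falls below 80% of the highest.
# Groups and decisions here are invented for illustration.
from collections import Counter

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

selected = Counter(group for group, ok in decisions if ok)
totals = Counter(group for group, _ in decisions)
rates = {g: selected[g] / totals[g] for g in totals}

ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}; disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: potential adverse impact -> audit before deployment")
```

Passing such a screen is not sufficient, of course; the authors’ second and third guidelines (human oversight and explainability) address harms that no aggregate metric captures.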

From the Conclusion

Given that ML algorithms are potentially harmful because they can compound and reproduce social inequalities, and that they rely on generalizations that disregard individual autonomy, their use should be strictly regulated. Yet these potential problems do not necessarily entail that ML algorithms should never be used, at least from the perspective of anti-discrimination law. Rather, these points lead to the conclusion that their use should be carefully and strictly regulated. However, before identifying the principles that could guide regulation, it is important to highlight two things. First, the context and potential impact associated with the use of a particular algorithm should be considered. Anti-discrimination laws do not aim to protect against any instance of differential treatment or impact, but rather to protect and balance the rights of implicated parties when they conflict [18, 19]. Hence, using ML algorithms in situations where no rights are threatened would presumably be either acceptable or, at least, beyond the purview of anti-discrimination regulations.

Second, one also needs to take into account how the algorithm is used and what place it occupies in the decision-making process. More precisely, it is clear from what was argued above that fully automated decisions, where an ML algorithm makes decisions with minimal or no human intervention in ethically high-stakes situations—i.e., where individual rights are potentially threatened—are presumably illegitimate because they fail to treat individuals as separate and unique moral agents. To avoid objectionable generalization and to respect our democratic obligations towards each other, a human agent should make the final decision—in a meaningful way that goes beyond rubber-stamping—or a human agent should at least be in a position to explain and justify the decision if a person affected by it asks for a revision. Although this does not necessarily preclude the use of ML algorithms, it suggests that their use should be inscribed in a larger, human-centric, democratic process.

Friday, December 30, 2022

A new control problem? Humanoid robots, artificial intelligence, and the value of control

Nyholm, S. (2022).
AI and Ethics.
https://doi.org/10.1007/s43681-022-00231-y

Abstract

The control problem usually discussed in relation to robots and AI is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good, while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g., control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

From the Concluding Discussion section

Self-control is often valued as good in itself or as an aspect of things that are good in themselves, such as virtue, personal autonomy, and human dignity. In contrast, control over other persons is often seen as wrong and bad in itself. This means, I have argued, that if control over AI can sometimes be seen or conceptualized as a form of self-control, then control over AI can sometimes be not only instrumentally good, but in certain respects also good as an end in itself. It can be a form of extended self-control, and therefore a form of virtue, personal autonomy, or even human dignity.

In contrast, if there will ever be any AI systems that could properly be regarded as moral persons, then it would be ethically problematic to wish to be in full control over them, since it is ethically problematic to want to be in complete control over a moral person. But even before that, it might still be morally problematic to want to be in complete control over certain AI systems; it might be problematic if they are designed to look and behave like human beings. There can be, I have suggested, something symbolically problematic about wanting to be in complete control over an entity that symbolizes or represents something—viz. a human being—that it would be morally wrong and in itself bad to try to completely control.

For these reasons, I suggest that it will usually be a better idea to try to develop AI systems that can sensibly be interpreted as extensions of our own agency while avoiding developing robots that can be, imitate, or represent moral persons. One might ask, though, whether the two possibilities can ever come together, so to speak.

Think, for example, of the robotic copy that the Japanese robotics researcher Hiroshi Ishiguro has created of himself. It is an interesting question whether the agency of this robot could be seen as an extension of Ishiguro’s agency. The robot certainly represents or symbolizes Ishiguro. So, if he has control over this robot, then perhaps this can be seen as a form of extended agency and extended self-control. While it might seem symbolically problematic if Ishiguro wants to have complete control over the robot Erica that he has created, which looks like a human woman, it might not be problematic in the same way if he wants to have complete control over the robotic replica that he has created of himself. At least it would be different in terms of what it can be taken to symbolize or represent.

Thursday, December 29, 2022

Parents’ Political Ideology Predicts How Their Children Punish

Leshin, R. A., Yudkin, D. A., Van Bavel, J. J., 
Kunkel, L., & Rhodes, M. (2022). 
Psychological Science
https://doi.org/10.1177/09567976221117154

Abstract

From an early age, children are willing to pay a personal cost to punish others for violations that do not affect them directly. Various motivations underlie such “costly punishment”: People may punish to enforce cooperative norms (amplifying punishment of in-groups) or to express anger at perpetrators (amplifying punishment of out-groups). Thus, group-related values and attitudes (e.g., how much one values fairness or feels out-group hostility) likely shape the development of group-related punishment. The present experiments (N = 269, ages 3–8, from across the United States) tested whether children’s punishment varies according to their parents’ political ideology—a possible proxy for the value systems transmitted to children intergenerationally. As hypothesized, parents’ self-reported political ideology predicted variation in the punishment behavior of their children. Specifically, parental conservatism was associated with children’s punishment of out-group members, and parental liberalism was associated with children’s punishment of in-group members. These findings demonstrate how differences in group-related ideologies shape punishment across generations.

Conclusion

The present findings suggest that political ideology shapes punishment across development. Counter to previous findings among adults (King & Maruna, 2009), parental conservatism (vs. liberalism) was not related to increased punishment overall. And counter to previous developmental research on belief transmission (Gelman et al., 2004), our patterns did not strengthen with age. Rather, we found that across development, the link between ideology and punishment hinged on group membership. Parental conservatism was associated with children’s punishment of out-groups, whereas parental liberalism was associated with children’s punishment of in-groups. Our findings add rich insights to our understanding of how costly punishment functions in group contexts and provide new evidence of the powerful transmission of belief systems across generations.

Wednesday, December 28, 2022

Physician-assisted suicide is not protected by Massachusetts Constitution, top state court rules

Chris Van Buskirk
masslive.com
Originally posted 6 Dec 22

The state’s highest court ruled Monday morning that the Massachusetts state constitution does not protect physician-assisted suicide and that laws around manslaughter may prohibit the practice.

The decision affects whether doctors can prescribe lethal amounts of medication that terminally ill patients could use to end their lives. The plaintiffs, a doctor looking to provide physician-assisted suicide and a patient with incurable cancer, argued that patients with six months or less to live have a constitutional right to bring about their death on their own terms.

But defendants in the case have said that the decision to legalize or formalize the procedure here in Massachusetts is a question best left to state lawmakers, not the courts. And in an 89-page ruling, Associate Justice Frank Gaziano wrote that the Supreme Judicial Court agreed with that position.

The court, he wrote, recognized the “paramount importance and profound significance of all end-of-life decisions” but that the Massachusetts Declaration of Rights does not reach so far as to protect physician-assisted suicide.

“Our decision today does not diminish the critical nature of these interests, but rather recognizes the limits of our Constitution, and the proper role of the judiciary in a functioning democracy. The desirability and practicality of physician-assisted suicide raises not only weighty philosophical questions about the nature of life and death, but also difficult technical questions about the regulation of the medical field,” Gaziano wrote. “These questions are best left to the democratic process, where their resolution can be informed by robust public debate and thoughtful research by experts in the field.”

Plaintiff Roger Kligler, a retired physician, was diagnosed with stage four metastatic prostate cancer, and in May 2018, a doctor told him that there was a fifty percent chance that he would die within five years.

Kligler, Gaziano wrote in the ruling, had not yet received a six-month prognosis, and his cancer “currently has been contained, and his physician asserts that it would not be surprising if Kligler were alive ten years from now.”

Tuesday, December 27, 2022

Are Illiberal Acts Unethical? APA’s Ethics Code and the Protection of Free Speech

O'Donohue, W., & Fisher, J. E. (2022). 
American Psychologist, 77(8), 875–886.
https://doi.org/10.1037/amp0000995

Abstract

The American Psychological Association’s (APA’s) Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017b; hereinafter referred to as the Ethics Code) does not contain an enforceable standard regarding psychologists’ role in either honoring or protecting the free speech of others, or ensuring that their own free speech is protected, including an important corollary of free speech, the protection of academic freedom. Illiberal acts illegitimately restrict civil liberties. We argue that the ethics of illiberal acts have not been adequately scrutinized in the Ethics Code. Psychologists require free speech to properly enact their roles as scientists as well as professionals who wish to advocate for their clients and students to enhance social justice. This article delineates criteria for what ought to be included in the Ethics Code, argues that ethical issues regarding the protection of free speech rights meet these criteria, and proposes language to be added to the Ethics Code.

Impact Statement

Freedom of speech is a fundamental civil right that has recently come under threat. Psychologists can only perform their duties as scientists, educators, or practitioners if they are not censored and do not fear censorship. The American Psychological Association’s (APA’s) Ethics Code contains no enforceable ethical standard to protect freedom of speech for psychologists. This article examines the ethics of free speech, argues that its protection is an ethical matter, and proposes specific language to be included in the Ethics Code to more clearly delineate psychologists’ rights and duties regarding free speech.

Conclusions

Free speech is central not only within the political sphere but also for the proper functioning of scholars and educators. Unfortunately, the ethics of free speech are not properly explicated in the current version of the American Psychological Association’s Ethics Code, which is particularly concerning given data that indicate a waning appreciation and protection of free speech in a variety of contexts. This article argues for robust protection of free speech rights through the inclusion of a clear and well-articulated statement in the Ethics Code of the psychologist’s duties related to free speech. Psychologists are committed to social justice, and there can be no social justice without free speech.