Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Fairness

Saturday, March 2, 2024

Unraveling the Mindset of Victimhood

Scott Barry Kaufman
Scientific American
Originally posted 29 June 2020

Here is an excerpt:

Constantly seeking recognition of one’s victimhood. Those who score high on this dimension have a perpetual need to have their suffering acknowledged. In general, this is a normal psychological response to trauma. Experiencing trauma tends to “shatter our assumptions” about the world as a just and moral place. Recognition of one’s victimhood is a normal response to trauma and can help reestablish a person’s confidence in their perception of the world as a fair and just place to live.

Also, it is normal for victims to want the perpetrators to take responsibility for their wrongdoing and to express feelings of guilt. Studies conducted on testimonies of patients and therapists have found that validation of the trauma is important for therapeutic recovery from trauma and victimization (see here and here).

A sense of moral elitism. Those who score high on this dimension perceive themselves as having an immaculate morality and view everyone else as being immoral. Moral elitism can be used to control others by accusing others of being immoral, unfair or selfish, while seeing oneself as supremely moral and ethical.

Moral elitism often develops as a defense mechanism against deeply painful emotions and as a way to maintain a positive self-image. As a result, those under distress tend to deny their own aggressiveness and destructive impulses and project them onto others. The “other” is perceived as threatening whereas the self is perceived as persecuted, vulnerable and morally superior.


Here is a summary:

Kaufman explores the concept of "interpersonal victimhood," a tendency to view oneself as the repeated target of unfair treatment by others. He identifies several key characteristics of this mindset, including:
  • Belief in inherent unfairness: The conviction that the world is fundamentally unjust and that one is disproportionately likely to experience harm.
  • Moral self-righteousness: The perception of oneself as more ethical and deserving of good treatment compared to others.
  • Rumination on past injustices: Dwelling on and replaying negative experiences, often with feelings of anger and resentment.
  • Difficulty taking responsibility: Attributing negative outcomes to external factors rather than acknowledging one's own role.
Kaufman argues that while acknowledging genuine injustices is important, clinging to a victimhood identity can be detrimental. It can hinder personal growth, strain relationships, and fuel negativity. He emphasizes the importance of developing a more balanced perspective, acknowledging both external challenges and personal agency. The article offers strategies for fostering resilience and moving beyond a victimhood mindset.

Thursday, November 2, 2023

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A. (2023).
Cognition, 231, 105323.
https://doi.org/10.1016/j.cognition.2022.105323
Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent, and enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1,440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third person judgments of the U.S. legal system. We found these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3) and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third-party motives.


Here's my quick summary: 

This research explores the concept of "phantom rules". Phantom rules are rules that are frequently broken without consequence for most, and are only occasionally enforced, often at the discretion of a third-party observer. Examples of phantom rules include jaywalking, speeding, and not coming to a complete stop at a stop sign.

The authors argue that phantom rules are a unique subclass of explicitly codified rules, and that they have a number of important implications for our understanding of law and society. For example, phantom rules can lead to people feeling like the law is unfair and that they are being targeted. They can also create a sense of lawlessness and disorder.

The authors conducted six experiments to investigate the psychological and social dynamics of phantom rules. They found evidence that people are more likely to punish violations of phantom rules when the violator has also violated a social norm. They also found that people are more likely to justify the selective enforcement of phantom rules when they believe that the violator is a deserving target.
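To make the experimental design concrete, here is a minimal sketch of the rule logic in the modified Dictator Game; the endowment and offer values are hypothetical, not the authors' materials. The point is that an offer can break the codified phantom rule (a fractional offer) without being selfish, and can be selfish without breaking the rule, which is what lets enforcement track motives rather than the rule itself.

```python
# Toy illustration of the modified Dictator Game described above.
# The endowment and the offers are hypothetical, not the authors' materials.

ENDOWMENT = 10  # amount the dictator splits with a receiver

def violates_phantom_rule(offer: float) -> bool:
    """The codified but rarely enforced rule forbids fractional offers."""
    return offer != int(offer)

def looks_selfish(offer: float, endowment: float = ENDOWMENT) -> bool:
    """The uncodified social norm: offers well below half look selfish."""
    return offer < endowment / 2

for offer in [5.0, 4.5, 1.0, 0.5]:
    print(f"offer={offer:>4}: rule violation={violates_phantom_rule(offer)}, "
          f"selfish={looks_selfish(offer)}")

# offer=5.0 -> neither: fair and whole
# offer=4.5 -> rule violation only: fair but fractional
# offer=1.0 -> selfish only: violates the norm, not the codified rule
# offer=0.5 -> both: the combination where enforcement was most common in the study
```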

The authors conclude by arguing that phantom rules are a significant social phenomenon with a number of important implications for law and society. They call for more research on the psychological and social dynamics of phantom rules, and on the impact of phantom rules on people's perceptions of the law and the criminal justice system.

Friday, October 20, 2023

Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs

Huber, C., Dreber, A., et al. (2023).
Proceedings of the National Academy of Sciences of the United States of America, 120(23).

Abstract

Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity—variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity—estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs—indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.

Significance

Using experiments involves leeway in choosing one out of many possible experimental designs. This choice constitutes a source of uncertainty in estimating the underlying effect size which is not incorporated into common research practices. This study presents the results of a crowd-sourced project in which 45 independent teams implemented research designs to address the same research question: Does competition affect moral behavior? We find a small adverse effect of competition on moral behavior in a meta-analysis involving 18,123 experimental participants. Importantly, however, the variation in effect size estimates across the 45 designs is substantially larger than the variation expected due to sampling errors. This “design heterogeneity” highlights that the generalizability and informativeness of individual experimental designs are limited.

Here are some of the key takeaways from the research:
  • Competition can have a small but significant negative effect on moral behavior.
  • This effect is likely because competition can lead people to become more self-interested and less concerned about the well-being of others.
  • Because effect sizes varied substantially across the 45 designs, conclusions drawn from any single experimental design may not generalize; testing the same hypothesis across many designs gives a more reliable answer.
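To make the idea of design heterogeneity concrete, here is a minimal random-effects meta-analysis sketch on simulated data; all the numbers below are invented for illustration and are not the study's estimates. The between-design variance (tau squared) is what remains after subtracting the spread you would expect from sampling error alone, and the paper reports that this between-design spread is roughly 1.6 times the average standard error.

```python
# Minimal random-effects meta-analysis sketch with simulated data.
# All numbers are illustrative; they are not the Huber et al. estimates.
import numpy as np

rng = np.random.default_rng(0)

n_designs = 45
true_mean_effect = -0.10          # small adverse effect of competition (assumed)
design_sd = 0.15                  # between-design spread in true effects (assumed)
se = np.full(n_designs, 0.08)     # per-design standard errors (assumed)

true_effects = rng.normal(true_mean_effect, design_sd, n_designs)
observed = rng.normal(true_effects, se)

# DerSimonian-Laird estimate of between-design variance (tau^2)
w = 1 / se**2
fixed_mean = np.sum(w * observed) / np.sum(w)
Q = np.sum(w * (observed - fixed_mean) ** 2)      # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (n_designs - 1)) / c)

# Random-effects pooled estimate
w_re = 1 / (se**2 + tau2)
pooled = np.sum(w_re * observed) / np.sum(w_re)

print(f"pooled effect        : {pooled:.3f}")
print(f"tau (between-design) : {np.sqrt(tau2):.3f}")
print(f"average SE           : {se.mean():.3f}")
print(f"tau / average SE     : {np.sqrt(tau2) / se.mean():.2f}")
# A ratio well above 1 means the designs disagree more than sampling error
# alone can explain; the paper reports roughly 1.6 for this ratio.
```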

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions them for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Identifying ownership of IP as AI begins to develop solutions in a faster, smarter way compared to humans.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works only in a subset of the population – or has only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the proposed drug compounds will not be as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
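The kind of curation Shieh-Newton describes can begin with something as simple as tabulating who is actually represented in the training data. Here is a minimal sketch; the column names, toy data, and flagging threshold are illustrative assumptions, not a recommended standard.

```python
# Minimal representation audit for a training dataset.
# Column names, toy data, and the flagging threshold are illustrative assumptions.
import pandas as pd

def audit_representation(df: pd.DataFrame, columns, min_share=0.05):
    """Report the share of each group and flag groups below min_share."""
    report = {}
    for col in columns:
        shares = df[col].value_counts(normalize=True).sort_values()
        report[col] = shares
        under = shares[shares < min_share]
        if not under.empty:
            flagged = ", ".join(f"{g} ({s:.1%})" for g, s in under.items())
            print(f"[{col}] underrepresented groups: {flagged}")
    return report

# Example with a toy patient table
patients = pd.DataFrame({
    "race": ["A"] * 90 + ["B"] * 8 + ["C"] * 2,
    "sex":  ["F"] * 55 + ["M"] * 45,
})
audit_representation(patients, columns=["race", "sex"])
```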

The info is here. 

Here is my take:

 One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Tuesday, August 29, 2023

Yale University settles lawsuit alleging it pressured students with mental health issues to withdraw

Associated Press
Originally posted 25 Aug 23

Yale University and a student group announced Friday that they've reached a settlement in a federal lawsuit that accused the Ivy League school of discriminating against students with mental health disabilities, including pressuring them to withdraw.

Under the agreement, Yale will modify its policies regarding medical leaves of absence, including streamlining the reinstatement process for students who return to campus. The student group, which also represents alumni, had argued the process was onerous, discouraging students for decades from taking medical leave when they needed it most.

The settlement is a “watershed moment” for the university and mental health patients, said 2019 graduate Rishi Mirchandani, a co-founder of Elis for Rachael, the group that sued. It was formed to help students with mental health issues in honor of a Yale student who took her own life.

“This historic settlement affirms that students with mental health needs truly belong," Mirchandani said.

A joint statement from Elis for Rachael and Yale, released on Friday, confirmed the agreement "to resolve a lawsuit filed last November in federal district court related to policies and practices impacting students with mental health disabilities.”

Under the agreement, Yale will allow students to study part-time if they have urgent medical needs. Elis for Rachael said it marks the first time the university has offered such an option. Students granted the accommodation at the beginning of a new term will receive a 50% reduction in tuition.

“Although Yale describes the circumstances for this accommodation as ‘rare,’ this change still represents a consequential departure from the traditional all-or-nothing attitude towards participation in academic life at Yale,” the group said in a statement.

The dean of Yale College, Pericles Lewis, said he was “pleased with today’s outcome.”


The potential good news: The lawsuit against Yale is a step towards ensuring that students with mental health disabilities have the same opportunities as other students. It is also a reminder that colleges and universities have a responsibility to create a supportive environment for all students, regardless of their mental health status.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.
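To illustrate that transparency point, here is a minimal sketch of a decision aid that exposes its principle weights rather than hiding them. The principles, weights, and scores are hypothetical illustrations, not the system Meier and colleagues describe.

```python
# Toy transparent decision aid: principles, weights, and scores are all
# hypothetical illustrations, not the system discussed in the article.

def recommend(options, weights):
    """Score each option as a weighted sum over explicit principle scores."""
    scored = {
        name: sum(weights[p] * score for p, score in principle_scores.items())
        for name, principle_scores in options.items()
    }
    best = max(scored, key=scored.get)
    return best, scored

# Each option carries per-principle scores in [0, 1] (hypothetical values).
options = {
    "continue treatment": {"autonomy": 0.2, "beneficence": 0.9},
    "withdraw treatment": {"autonomy": 0.9, "beneficence": 0.4},
}
weights = {"autonomy": 0.6, "beneficence": 0.4}   # weights the aid makes explicit

best, scored = recommend(options, weights)
print(scored)                  # the committee sees exactly how the numbers combine
print("recommended:", best)

# Because the weights are explicit, a committee that thinks beneficence deserves
# more weight in this case can rerun with adjusted weights and state precisely
# why its final decision diverges from the aid's recommendation.
weights_committee = {"autonomy": 0.4, "beneficence": 0.6}
print(recommend(options, weights_committee))
```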

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.
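As a rough sketch of what the competing principles amount to in practice, the toy example below contrasts a "prioritize the worst-off" rule with a "maximize total gains" rule for an AI assistant deciding whom to help. The payoffs and boost values are illustrative assumptions, not the study's materials.

```python
# Toy comparison of two governance principles for an AI assistant.
# Payoffs and the size of the assistant's boost are illustrative assumptions.

players = {"p1": 12, "p2": 7, "p3": 3}   # current payoffs (assumed)
boost = {"p1": 5, "p2": 4, "p3": 3}      # gain each player gets if helped (assumed)

def prioritize_worst_off(payoffs):
    """Help the player who currently has the least."""
    return min(payoffs, key=payoffs.get)

def maximize_total(boosts):
    """Help whoever yields the largest aggregate gain."""
    return max(boosts, key=boosts.get)

print("worst-off principle helps:", prioritize_worst_off(players))   # p3
print("maximize-total principle helps:", maximize_total(boost))      # p1
```

Behind the veil, participants more often chose the first kind of principle, and they tended to keep endorsing it even after learning which position they would occupy.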

Wednesday, May 3, 2023

Advocates of high court reform give Roberts poor marks

Kelsey Reichmann
Courthouse News Service
Originally published 27 April 23

The final straw for ethics experts wondering if the leader of one of the nation’s most powerful bodies would uphold the institutionalist views associated with his image came on Tuesday as Chief Justice John Roberts declined to testify before Congress about ethical concerns at the Supreme Court. 

“You can't actually have checks and balances if one branch is so powerful that the other branches cannot, in fact, engage in their constitutionally mandated role to provide a check on inappropriate or illegal behavior,” Caroline Fredrickson, a distinguished visitor from practice at Georgetown Law, said in a phone interview. “Then we have a defective system.” 

Roberts cited concerns about separation of powers as the basis for declining to testify before the Senate Judiciary Committee on the court’s ethical standards — or lack thereof. Fredrickson said it was a canard that a system based on checks and balances would not be able to do just that. 

“It sort of puts the question to the entire structure of separation of powers and checks and balances,” Fredrickson said. 

For the past several weeks, one of the associate justices has been at the heart of controversy. After blockbuster reporting revealed that Republican megadonor Harlan Crow has footed the bill for decades of luxury vacations enjoyed by Justice Clarence Thomas, the revelations brought scrutiny on the disclosure laws that bind the justices and called into question why the justices are not bound by ethics standards like the rest of the judiciary and other branches of government.

“For it to function, it relies on the public trust, and the trust of the other institutions to abide by the court's findings,” Virginia Canter, chief ethics counsel at Citizens for Responsibility and Ethics in Washington, said in a phone call. “If the court and its members are willing to live without any standards, then I think that ultimately the whole process and the institution start to unravel.” 

Many court watchers saw opportunity for action here on a call that has been made for years: the adoption of an ethics code.

“The idea that the Supreme Court would continue to operate without one, it's just ridiculous,” Gabe Roth, executive director of Fix the Court, said in a phone call. 

Along with his letter declining to testify before Congress on the court’s ethics, Roberts included a statement listing principles and practices the court “subscribes” to. The statement was signed by all nine justices. 

For ethics experts raising alarm bells on this subject, a restatement of guidelines that the justices are already supposed to follow did not meet the moment.

“It's just a random — in my view at least — conglomeration of paragraphs that rehash things you already knew, but, yeah, good for him for getting all nine justices on board with something that already exists,” Roth said. 

Monday, May 1, 2023

Take your ethics and shove it! Narcissists' angry responses to ethical leadership

Fox, F. R., Smith, M. B., & Webster, B. D. (2023). 
Personality and Individual Differences, 204, 112032.
https://doi.org/10.1016/j.paid.2022.112032

Abstract

Evoking the agentic model of narcissism, the present study contributes to understanding the nuanced responses to ethical leadership that result from the non-normative, dark personality trait of narcissism. We draw from affective events theory to understand why narcissists respond to ethical leadership with feelings of anger, which then results in withdrawal behaviors. We establish internal validity by testing our model via an experimental design. Next, we establish external validity by testing our theoretical model in a field study of university employees. Together, results from the studies suggest anger mediates the positive relationship between narcissism and withdrawal under conditions of high ethical leadership. We discuss the theoretical and practical implications of our findings.

From the Introduction:

Ethical leaders model socially acceptable behavior that is prosocial in nature while matching an individual moral-compass with the good of the group (Brown et al., 2005). Ethical leadership is defined as exalting the moral person (i.e., being an ethical example, fair treatment) and the moral manager (i.e., encourage normative behavior, discourage unethical behavior), and has been shown to be related to several beneficial organizational outcomes (Den Hartog, 2015; Mayer et al., 2012). The construct of ethical leadership is not only based on moral/ethical principles, but overtly promoting normative communally beneficial ideals and establishing guidelines for acceptable behavior (Bedi et al., 2016; Brown et al., 2005). Ethical leaders cultivate a reputation founded upon doing the right thing, treating others fairly, and thinking about the common good.

As a contextual factor, ethical leadership presents a situation where employees are presented with expectations and clear standards for normative behavior. Indeed, ethical leaders, by their behavior, convey what behavior is expected, rewarded, and punished (Brown et al., 2005). In other words, ethical leaders set the standard for behavior in the organization and are effective at establishing fair and transparent processes for rewarding performance. Consequently, ethical leadership has been shown to be positively related to task performance and citizenship behavior and negatively related to deviant behaviors (Peng & Kim, 2020).


This research examines how narcissistic individuals respond to ethical leadership, which is characterized by fairness, transparency, and concern for the well-being of employees. The study found that narcissistic individuals are more likely to respond with anger and hostility to ethical leadership compared to non-narcissistic individuals. The researchers suggest that this may be because narcissists prioritize their own self-interests and are less concerned with the well-being of others. Ethical leadership, which promotes the well-being of employees, may therefore be perceived as a threat to their self-interests, leading to a negative response.

The study also found that when narcissists were in a leadership position, they were less likely to engage in ethical leadership behaviors themselves. This suggests that narcissistic individuals may not only be resistant to ethical leadership but may also be less likely to exhibit these behaviors themselves. The findings of this research have important implications for organizations and their leaders, as they highlight the challenges of promoting ethical leadership in the presence of narcissistic individuals.

Thursday, March 23, 2023

Are there really so many moral emotions? Carving morality at its functional joints

Fitouchi L., André J., & Baumard N.
To appear in L. Al-Shawaf & T. K. Shackelford (Eds.)
The Oxford Handbook of Evolution and the Emotions.
New York: Oxford University Press.

Abstract

In recent decades, a large body of work has highlighted the importance of emotional processes in moral cognition. Since then, a heterogeneous bundle of emotions as varied as anger, guilt, shame, contempt, empathy, gratitude, and disgust have been proposed to play an essential role in moral psychology.  However, the inclusion of these emotions in the moral domain often lacks a clear functional rationale, generating conflations between merely social and properly moral emotions. Here, we build on (i) evolutionary theories of morality as an adaptation for attracting others’ cooperative investments, and on (ii) specifications of the distinctive form and content of moral cognitive representations. On this basis, we argue that only indignation (“moral anger”) and guilt can be rigorously characterized as moral emotions, operating on distinctively moral representations. Indignation functions to reclaim benefits to which one is morally entitled, without exceeding the limits of justice. Guilt functions to motivate individuals to compensate their violations of moral contracts. By contrast, other proposed moral emotions (e.g. empathy, shame, disgust) appear only superficially associated with moral cognitive contents and adaptive challenges. Shame doesn’t track, by design, the respect of moral obligations, but rather social valuation, the two being not necessarily aligned. Empathy functions to motivate prosocial behavior between interdependent individuals, independently of, and sometimes even in contradiction with the prescriptions of moral intuitions. While disgust is often hypothesized to have acquired a moral role beyond its pathogen-avoidance function, we argue that both evolutionary rationales and psychological evidence for this claim remain inconclusive for now.

Conclusion

In this chapter, we have suggested that a specification of the form and function of moral representations leads to a clearer picture of moral emotions. In particular, it enables a principled distinction between moral and non-moral emotions, based on the particular types of cognitive representations they process. Moral representations have a specific content: they represent a precise quantity of benefits that cooperative partners owe each other, a legitimate allocation of costs and benefits that ought to be, irrespective of whether it is achieved by people’s actual behaviors. Humans intuit that they have a duty not to betray their coalition, that innocent people do not deserve to be harmed, that their partner has a right not to be cheated on. Moral emotions can thus be defined as superordinate programs orchestrating cognition, physiology and behavior in accordance with the specific information encoded in these moral representations.

On this basis, indignation and guilt appear as prototypical moral emotions. Indignation (“moral anger”) is activated when one receives fewer benefits than one deserves, and recruits bargaining mechanisms to enforce the violated moral contract. Guilt, symmetrically, is sensitive to one’s failure to honor one’s obligations toward others, and motivates compensation to provide them the missing benefits they deserve. By contrast, often-proposed “moral” emotions – shame, empathy, disgust – seem not to function to compute distinctively moral representations of cooperative obligations, but serve other, non-moral functions – social status management, interdependence, and pathogen avoidance (Figure 2).

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).
https://doi.org/10.5210/fm.v28i1.12903

Abstract

The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.

Conclusion

The proliferation of artificial intelligence (AI) technologies as behind the scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the base problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions in the information to analyse, what analytic process to use, and what the end product of analysis will be. Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool to examine the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today—the medical, social, and relational models—and two use cases in healthcare and government benefits to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks for harm.

Friday, January 27, 2023

Moral foundations, values, and judgments in extraordinary altruists

Amormino, P., Ploe, M.L. & Marsh, A.A.
Sci Rep 12, 22111 (2022).
https://doi.org/10.1038/s41598-022-26418-1

Abstract

Donating a kidney to a stranger is a rare act of extraordinary altruism that appears to reflect a moral commitment to helping others. Yet little is known about patterns of moral cognition associated with extraordinary altruism. In this preregistered study, we compared the moral foundations, values, and patterns of utilitarian moral judgments in altruistic kidney donors (n = 61) and demographically matched controls (n = 58). Altruists expressed more concern only about the moral foundation of harm, but no other moral foundations. Consistent with this, altruists endorsed utilitarian concerns related to impartial beneficence, but not instrumental harm. Contrary to our predictions, we did not find group differences between altruists and controls in basic values. Extraordinary altruism generally reflected opposite patterns of moral cognition as those seen in individuals with psychopathy, a personality construct characterized by callousness and insensitivity to harm and suffering. Results link real-world, costly, impartial altruism primarily to moral cognitions related to alleviating harm and suffering in others rather than to basic values, fairness concerns, or strict utilitarian decision-making.

Discussion

In the first exploration of patterns of moral cognition that characterize individuals who have engaged in real-world extraordinary altruism, we found that extraordinary altruists are distinguished from other people only with respect to a narrow set of moral concerns: they are more concerned with the moral foundation of harm/care, and they more strongly endorse impartial beneficence. Together, these findings support the conclusion that extraordinary altruists are morally motivated by an impartial concern for relieving suffering, and in turn, are motivated to improve others’ welfare in a self-sacrificial manner that does not allow for the harm of others in the process. These results are also partially consistent with extraordinary altruism representing the inverse of psychopathy in terms of moral cognition: altruists score lower in psychopathy (with the strongest relationships observed for psychopathy subscales associated with socio-affective responding) and higher-psychopathy participants most reliably endorse harm/care less than lower psychopathy participants, with participants with higher scores on the socio-affective subscales of our psychopathy measures also endorsing impartial beneficence less strongly.

(cut)

Notably, and contrary to our predictions, we did not find that donating a kidney to a stranger is strongly or consistently correlated (positively or negatively) with basic values like universalism, benevolence, power, hedonism, or conformity. That suggests extraordinary altruism may not be driven by unusual values, at least as they are measured by the Schwartz inventory, but rather by specific moral concerns (such as harm/care). Our findings suggest that reported values may not in themselves predict whether one acts on those values when it comes to extraordinary altruism, much as “…a person can value being outgoing in social gatherings, independently of whether they are prone to acting in a lively or sociable manner”. Similarly, people who share a common culture may value common things but acting on those values to an extraordinarily costly and altruistic degree may require a stronger motivation––a moral motivation.

Friday, January 13, 2023

How Much (More) Should CEOs Make? A Universal Desire for More Equal Pay

Kiatpongsan, S., & Norton, M. I. (2014).
Perspectives on Psychological Science, 9(6), 587–593.
https://doi.org/10.1177/1745691614549773

Abstract

Do people from different countries and different backgrounds have similar preferences for how much more the rich should earn than the poor? Using survey data from 40 countries (N = 55,238), we compare respondents’ estimates of the wages of people in different occupations—chief executive officers, cabinet ministers, and unskilled workers—to their ideals for what those wages should be. We show that ideal pay gaps between skilled and unskilled workers are significantly smaller than estimated pay gaps and that there is consensus across countries, socioeconomic status, and political beliefs. Moreover, data from 16 countries reveals that people dramatically underestimate actual pay inequality. In the United States—where underestimation was particularly pronounced—the actual pay ratio of CEOs to unskilled workers (354:1) far exceeded the estimated ratio (30:1), which in turn far exceeded the ideal ratio (7:1). In sum, respondents underestimate actual pay gaps, and their ideal pay gaps are even further from reality than those underestimates.

Conclusion

These results demonstrate a strikingly consistent belief that the gaps in incomes between skilled and unskilled workers should be smaller than people believe them to be – and much smaller than these gaps actually are. The consensus that income gaps between skilled and unskilled workers should be smaller holds in all subgroups of respondents regardless of their age, education, socioeconomic status, political affiliation and opinions on inequality and pay. As a result, they suggest that – in contrast to a belief that only the poor and members of left-wing political parties desire greater income equality – people all over the world, and from all walks of life, would prefer smaller pay gaps between the rich and poor.
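The US figures in the abstract make the size of the perception gap easy to compute; the quick sketch below uses only the three ratios reported there.

```python
# Pay ratios reported in the abstract for the United States (CEO : unskilled worker).
actual_ratio = 354      # actual pay ratio
estimated_ratio = 30    # what respondents estimated
ideal_ratio = 7         # what respondents said it should be

print(f"actual vs estimated : {actual_ratio / estimated_ratio:.1f}x")
print(f"actual vs ideal     : {actual_ratio / ideal_ratio:.1f}x")
print(f"estimated vs ideal  : {estimated_ratio / ideal_ratio:.1f}x")
# 354/30 is about 11.8: respondents thought the gap was roughly a twelfth of its real size.
# 354/7 is about 50.6: the actual gap is roughly fifty times the ideal one.
```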

Thursday, January 5, 2023

The Supreme Court Needs Real Oversight

Glen Fine
The Atlantic
Originally posted 5 DEC 22

Here is an excerpt:

The lack of ethical rules that bind the Court is the first problem—and the easier one to address. The Code of Conduct for United States Judges, promulgated by the federal courts’ Judicial Conference, “prescribes ethical norms for federal judges as a means to preserve the actual and apparent integrity of the federal judiciary.” The code covers judicial conduct both on and off the bench, including requirements that judges act at all times to promote public confidence in the integrity and impartiality of the judiciary. But this code applies only to lower-level federal judges, not to the Supreme Court, which has not issued ethical rules that apply to its own conduct. The Court should explicitly adopt this code or a modified one.

Chief Justice Roberts has noted that Supreme Court justices voluntarily consult the Code of Conduct and other ethical rules for guidance. He has also pointed out that the justices can seek ethical advice from a variety of sources, including the Court’s Legal Office, the Judicial Conference’s Committee on Codes of Conduct, and their colleagues. But this is voluntary, and each justice decides independently whether and how ethical rules apply in any particular case. No one—including the chief justice—has the ability to alter a justice’s self-judgment.

Oversight of the judiciary is a more difficult issue, involving separation-of-powers concerns. I was the inspector general of the Department of Justice for 11 years and the acting inspector general of the Department of Defense for four years; I saw the importance and challenges of oversight in two of the most important government agencies. I also experienced the difficulties in conducting complex investigations of alleged misconduct, including leak investigations. But as I wrote in a Brookings Institution article this past May after the Dobbs leak, the Supreme Court does not have the internal capacity to effectively investigate such leaks, and it would benefit from a skilled internal investigator, like an inspector general, to help oversee the Court and the judiciary.

Another example of the Court’s ineffective self-policing and lack of transparency involves its recusal decisions. For example, Justice Thomas’s wife, Virginia Thomas, has argued that the 2020 presidential election was stolen, sent text messages to former White House Chief of Staff Mark Meadows urging him and the White House to seek to overturn the election, and expressed support for the pro-Trump January 6 rally on the Ellipse. Nevertheless, Justice Thomas has not recused himself in cases relating to the subsequent attack on the Capitol.

Notably, Thomas was the only justice to dissent from the Court’s decision not to block the release to the January 6 committee of White House records related to the attack, which included his wife’s texts. Some legal experts have argued that this is a clear instance where recusal should have occurred. Statute 28 U.S.C. 455 requires federal judges, including Supreme Court justices, to recuse themselves from a case when they know that their spouse has any interest that could be substantially affected by the outcome. In addition, the statute requires justices and judges to disqualify themselves in any proceeding in which their impartiality may reasonably be questioned.

Tuesday, December 20, 2022

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A.
Cognition, Volume 231, February 2023, 105323

Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent, and enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1,440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third person judgments of the U.S. legal system. We found these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3) and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third-party motives.

General Discussion

In this paper, we identified a subset of rules, which are explicitly codified (e.g., in professional tennis, in an economic game, by the U.S. legal system), frequently violated, and rarely enforced. As a result, their apparent punishability is particularly ambiguous and subject to motivation. These rules show us that codified rules, which are meant to apply equally to all, can be used to sanction behaviors outside of their jurisdiction. We named this subclass of rules phantom rules and found evidence that people enforce them according to their desire to punish a different behavior (i.e., a social norm violation), recognize them in the U.S. legal system, and employ motivated reasoning to determine their punishability. We hypothesized and found, across behavioral and survey experiments, that phantom rules—rules where the descriptive norms of enforcement are low—seem enforceable, punishable, and legitimate only when one has an external active motivation to punish. Indeed, we found that phantom rules were judged to be more justifiably enforced and more morally wrong to violate when the person who broke the rule had also violated a social norm—unless they were also punished for that social norm violation. Together, we take this as evidence of the existence of phantom rules and the malleability of their apparent punishability via active (vs. satiated) punishment motivation.

The ambiguity of phantom rule enforcement makes it possible for them to serve a hidden function; they can be used to punish behavior outside of the purview of the official rules. Phantom rule violations are technically wrong, but on average, seen as less morally wrong. This means, for the most part, that people are unlikely to feel strongly when they see these rules violated, and indeed, people frequently violate phantom rules without consequence. This pattern fits well with previous work in experimental philosophy that shows that motivations can affect how we reason about what constitutes breaking a rule in the first place. For example, when rule breaking occurs blamelessly (e.g., unintentionally), people are less likely to say a rule was violated at all and look for reasons to excuse the behavior (Turri, 2019; Turri & Blouw, 2015). Indeed, our findings mirror this pattern. People find a reason to punish phantom rule violations only when they are particularly or dispositionally motivated to punish.

Monday, November 14, 2022

Your Land Acknowledgment Is Not Enough

Joseph Pierce
hyperallergic.com
Originally posted 12 OCT 22

Here is an excerpt:

Museums that once stole Indigenous bones now celebrate Indigenous Peoples’ Day. Organizations that have never hired an Indigenous person now admit the impact of Indigenous genocide through social media. Land-grant universities scramble to draft statements about their historical ties to fraudulent treaties and pilfered graves. Indeed, these are challenging times for institutions trying to do right by Indigenous peoples.

Some institutions will seek the input of an Indigenous scholar or perhaps a community. They will feel contented and “diverse” because of this input. They want a decolonial to-do list. But what we have are questions: What changes when an institution publishes a land acknowledgment? What material, tangible changes are enacted?

Without action, without structural change, acknowledging stolen land is what Eve Tuck and K. Wayne Yang call a “settler move to innocence.” Institutions are not innocent. Settlers are not innocent.

The problem with land acknowledgments is that they are almost never followed by meaningful action. Acknowledgment without action is an empty gesture, exculpatory and self-serving. What is more, such gestures shift the onus of action back onto Indigenous people, who neither asked for an apology nor have the ability to forgive on behalf of the land that has been stolen and desecrated. It is not my place to forgive on behalf of the land.

A land acknowledgment is not enough.

This is what settler institutions do not understand: Land does not require that you confirm it exists, but that you reciprocate the care it has given you. Land is not asking for acknowledgment. It is asking to be returned to itself. It is asking to be heard and cared for and attended to. It is asking to be free.

Land is not an object, not a thing. Land does not require recognition. It requires care. It requires presence.

Land is a gift, a relative, a body that sustains other bodies. And if the land is our relative, then we cannot simply acknowledge it as land. We must understand what our responsibilities are to the land as our kin. We must engage in a reciprocal relationship with the land. Land is — in its animate multiplicities — an ongoing enactment of reciprocity.

A land acknowledgment is not enough.

Wednesday, November 2, 2022

How the Classics Changed Research Ethics

Scott Sleek
Psychological Science
Originally posted 31 AUG 22

Here is an excerpt:

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inducing physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” she explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support? 

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009). 

Other scientists have complained that IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

Tuesday, May 17, 2022

Why it’s so damn hard to make AI fair and unbiased

Sigal Samuel
Vox.com
Originally posted 19 APR 2022

Here is an excerpt:

So what do big players in the tech space mean, really, when they say they care about making AI that’s fair and unbiased? Major organizations like Google, Microsoft, even the Department of Defense periodically release value statements signaling their commitment to these goals. But they tend to elide a fundamental reality: Even AI developers with the best intentions may face inherent trade-offs, where maximizing one type of fairness necessarily means sacrificing another.

The public can’t afford to ignore that conundrum. It’s a trap door beneath the technologies that are shaping our everyday lives, from lending algorithms to facial recognition. And there’s currently a policy vacuum when it comes to how companies should handle issues around fairness and bias.

“There are industries that are held accountable,” such as the pharmaceutical industry, said Timnit Gebru, a leading AI ethics researcher who was reportedly pushed out of Google in 2020 and who has since started a new institute for AI research. “Before you go to market, you have to prove to us that you don’t do X, Y, Z. There’s no such thing for these [tech] companies. So they can just put it out there.”

That makes it all the more important to understand — and potentially regulate — the algorithms that affect our lives. So let’s walk through three real-world examples to illustrate why fairness trade-offs arise, and then explore some possible solutions.

How would you decide who should get a loan?

Here’s another thought experiment. Let’s say you’re a bank officer, and part of your job is to give out loans. You use an algorithm to help you figure out whom you should loan money to, based on a predictive model — chiefly taking into account their FICO credit score — about how likely they are to repay. Most people with a FICO score above 600 get a loan; most of those below that score don’t.

One type of fairness, termed procedural fairness, would hold that an algorithm is fair if the procedure it uses to make decisions is fair. That means it would judge all applicants based on the same relevant facts, like their payment history; given the same set of facts, everyone will get the same treatment regardless of individual traits like race. By that measure, your algorithm is doing just fine.

But let’s say members of one racial group are statistically much more likely to have a FICO score above 600 and members of another are much less likely — a disparity that can have its roots in historical and policy inequities like redlining that your algorithm does nothing to take into account.

Another conception of fairness, known as distributive fairness, says that an algorithm is fair if it leads to fair outcomes. By this measure, your algorithm is failing, because its recommendations have a disparate impact on one racial group versus another.
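
The tension between these two notions is easy to make concrete in a few lines of code. The sketch below is not from the article; it is a minimal illustration using synthetic data, with hypothetical group names, score distributions, and a sample size invented purely for demonstration. A single threshold of 600 is applied identically to every applicant (procedural fairness), yet the resulting approval rates diverge across groups (the disparate impact that distributive fairness objects to).

```python
import random

random.seed(0)

THRESHOLD = 600   # same cutoff applied to every applicant
N = 10_000        # synthetic applicants per group

def sample_scores(mean, stdev, n):
    """Draw n synthetic credit scores, clamped to the FICO range 300-850."""
    return [min(850, max(300, random.gauss(mean, stdev))) for _ in range(n)]

# Hypothetical groups whose score distributions differ for historical reasons
# the algorithm never sees (e.g., redlining). The numbers are illustrative only.
groups = {
    "group_A": sample_scores(mean=660, stdev=80, n=N),
    "group_B": sample_scores(mean=610, stdev=80, n=N),
}

for name, scores in groups.items():
    approved = sum(score >= THRESHOLD for score in scores)
    print(f"{name}: approval rate = {approved / N:.1%}")

# Same rule, same threshold, same treatment of identical facts -- yet the
# approval rates differ sharply between the two groups.
```

In this toy setup, moving the threshold shifts both approval rates but does not close the gap between them, which is one way to see why satisfying both conceptions of fairness at once is so difficult.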

Saturday, February 12, 2022

Privacy and digital ethics after the pandemic

Carissa Véliz
Nature Electronics
Vol. 4, January 2022, pp. 10–11.

The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet, and Facebook has shadow profiles on netizens even when they have never had an account on the platform.

(cut)

Reasons for optimism

Despite the concerning trends regarding privacy and digital ethics during the pandemic, there are reasons to be cautiously optimistic about the future.  First, citizens around the world are increasingly suspicious of tech companies, and are gradually demanding more from them. Second, there is a growing awareness that the lack of privacy ingrained in current apps entails a national security risk, which can motivate governments into action. Third, US President Joe Biden seems eager to collaborate with the international community, in contrast to his predecessor. Fourth, regulators in the US are seriously investigating how to curtail tech’s power, as evidenced by the Department of Justice’s antitrust lawsuit against Google and the Federal Trade Commission’s (FTC) antitrust lawsuit against Facebook.  Amazon and YouTube have also been targeted by the FTC for a privacy investigation. With discussions of a federal privacy law becoming more common in the US, it would not be surprising to see such a development in the next few years. Tech regulation in the US could have significant ripple effects elsewhere.

Thursday, January 20, 2022

An existential threat to humanity: Democracy’s decline

Kaushik Basu
Japan Times
Originally posted 24 DEC 21

Here are two excerpts:

Most people do not appreciate the extent to which civilizations depend on pillars of norms and conventions. Some of these have evolved organically over time, while others required deliberation and collective action. If one of the pillars buckles, a civilization could well collapse.

(cut)

When a vast majority of a country’s population is ready to rebel, as seemed to be the case in Belarus in the summer of 2020, and the leader has limited capacity to suppress the uprising, how can he or she prevail?

To address this question, I developed an allegory I call the “Incarceration Game.” Some 1 million citizens of a particular country want to join a rebellion to overthrow the tyrannical leader who can catch and jail at most 100 rebels. With such a low probability of being caught, each person is ready to take to the streets. The leader’s situation looks hopeless.

Suppose he nonetheless announces that he will incarcerate the 100 oldest people who join the uprising. At first sight, it appears that this will not stop the rebellion, because the vast number of young people will have no reason to abandon it. But if people’s ages are common knowledge, the outcome will be different. After the leader’s announcement, the 100 oldest people will not join the revolt, because the pain of certain incarceration is too great even for a good cause. Knowing this, the next 100 oldest people also will not take part in the revolution, nor will the 100 oldest people after them. By induction, no one will. The streets will be empty.
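
The backward-induction logic of the allegory can be simulated in a few lines. The sketch below is not from Basu’s article; it is a minimal illustration under stated assumptions: a population of one million whose ages are common knowledge, and a decision rule in which a person joins only if enough older people have already joined to shield them from the “100 oldest rebels” penalty. The population size, decision rule, and iteration order are modeling choices made here for illustration.

```python
POPULATION = 1_000_000   # citizens, indexed from oldest (0) to youngest
JAIL_CAPACITY = 100      # the leader can jail at most the 100 oldest joiners

# Naive view: each person assumes all one million join, so the chance of being
# among the 100 who are jailed looks negligible -- everyone is willing to march.
jail_risk_if_all_join = JAIL_CAPACITY / POPULATION
print(f"If everyone joined, chance of jail = {jail_risk_if_all_join:.4%}")

# With ages as common knowledge, reason from the oldest person downward:
# a person joins only if at least JAIL_CAPACITY older people have already
# joined, so that they cannot be among the 100 oldest joiners.
joiners = 0                            # everyone processed so far is older
for _person in range(POPULATION):      # person 0 is the oldest citizen
    if joiners >= JAIL_CAPACITY:       # enough older joiners to shield them
        joiners += 1                   # safe from jail, so they join
print(f"With ages as common knowledge, joiners = {joiners:,}")
```

Under these assumptions the simulation reproduces the allegory’s conclusion: the naive risk calculation makes rebellion look safe, yet once ages are common knowledge the set of willing rebels unravels to zero.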

Authoritarian rulers’ intentional or unwitting use of such an approach may help to explain why earlier revolts crumbled when on the verge of success. To demonstrate this empirically in history or in recent cases, like that of Belarus or Myanmar, will require data that we do not have yet. The incarceration game is a purely logical conjecture. What it does, importantly, is to remind us that toppling a dictator requires a strategy to foil such a tactic. Good intentions alone are not sufficient; the upholding of democracy needs a strategy based on sound analysis.