Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, October 9, 2023

They Studied Dishonesty. Was Their Work a Lie?

Gideon Lewis-Kraus
The New Yorker
Originally published 30 Sept 23

Here is an excerpt:

Despite a good deal of readily available evidence to the contrary, neoclassical economics took it for granted that humans were rational. Kahneman and Tversky found flaws in this assumption, and built a compendium of our cognitive biases. We rely disproportionately on information that is easily retrieved: a recent news article about a shark attack seems much more relevant than statistics about how rarely such attacks actually occur. Our desires are in flux—we might prefer pizza to hamburgers, and hamburgers to nachos, but nachos to pizza. We are easily led astray by irrelevant details. In one experiment, Kahneman and Tversky described a young woman who had studied philosophy and participated in anti-nuclear demonstrations, then asked a group of participants which inference was more probable: either “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.” More than eighty per cent chose the latter, even though it is a subset of the former. We weren’t Homo economicus; we were giddy and impatient, our thoughts hasty, our actions improvised. Economics tottered.

Behavioral economics emerged for public consumption a generation later, around the time of Ariely’s first book. Where Kahneman and Tversky held that we unconsciously trick ourselves into doing the wrong thing, behavioral economists argued that we might, by the same token, be tricked into doing the right thing. In 2008, Richard Thaler and Cass Sunstein published “Nudge,” which argued for what they called “libertarian paternalism”—the idea that small, benign alterations of our environment might lead to better outcomes. When employees were automatically enrolled in 401(k) programs, twice as many saved for retirement. This simple bureaucratic rearrangement improved a great many lives.

Thaler and Sunstein hoped that libertarian paternalism might offer “a real Third Way—one that can break through some of the least tractable debates in contemporary democracies.” Barack Obama, who hovered above base partisanship, found much to admire in the promise of technocratic tinkering. He restricted his outfit choices mostly to gray or navy suits, based on research into “ego depletion,” or the concept that one might exhaust a given day’s reservoir of decision-making energy. When, in the wake of the 2008 financial crisis, Obama was told that money “framed” as income was more likely to be spent than money framed as wealth, he enacted monthly tax deductions instead of sending out lump-sum stimulus checks. He eventually created a behavioral-sciences team in the White House. (Ariely had once found that our decisions in a restaurant are influenced by whoever orders first; it’s possible that Obama was driven by the fact that David Cameron, in the U.K., was already leaning on a “nudge unit.”)

The nudge, at its best, was modest—even a minor potential benefit at no cost pencilled out. In the Obama years, a pop-up on computers at the Department of Agriculture reminded employees that single-sided printing was a waste, and that advice reduced paper use by six per cent. But as these ideas began to intermingle with those in the adjacent field of social psychology, the reasonable notion that some small changes could have large effects at scale gave way to a vision of individual human beings as almost boundlessly pliable. Even Kahneman was convinced. He told me, “People invented things that shouldn’t have worked, and they were working, and I was enormously impressed by it.” Some of these interventions could be implemented from above. 


Sunday, October 8, 2023

Moral Uncertainty and Our Relationships with Unknown Minds

Danaher, J. (2023). 
Cambridge Quarterly of Healthcare Ethics, 
32(4), 482-495.
doi:10.1017/S0963180123000191

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.


My take: 

John Danaher explores the ethical challenges of interacting with entities whose moral status is uncertain, such as artificial beings, animals, and patients with locked-in syndrome. Danaher argues that this is best understood as an ethical-epistemic challenge, and that we need to develop meta-moral decision rules that allow us to minimize the risks of moral wrongdoing or improve the choiceworthiness of our actions.

One particular argument that Danaher adopts is the "risk asymmetry argument," which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. In the context of human-AI relationships, Danaher argues that how we resolve the uncertainty should turn on a careful, empirically grounded assessment of the risks of being wrong in either direction, rather than on first settling the philosophical dispute about whether such systems really have minds.

Danaher acknowledges that this approach may create some tension in our moral views, as it suggests that we should be skeptical about the basic moral status of AI systems, but more open to the possibility of meaningful relationships with them. However, he argues that this is the most sensible approach to take, given the ethical-epistemic challenges that we face.
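
To make the structure of the risk asymmetry argument concrete, here is a minimal sketch in Python. Every number in it is a hypothetical illustration of mine, not a value from Danaher's article: the point is only that when the moral cost of one kind of error is assumed to be much larger than the other, the expected-cost comparison can favor one practical policy even under substantial uncertainty.

# Toy expected-moral-cost comparison under moral uncertainty.
# All credences and costs are hypothetical, chosen only to show how an
# asymmetry in error costs can settle a practical question.

def expected_cost(credence_has_status, cost_if_has_status, cost_if_lacks_status):
    """Expected moral cost of a policy, weighting each outcome by our credence."""
    return (credence_has_status * cost_if_has_status
            + (1 - credence_has_status) * cost_if_lacks_status)

credence = 0.2  # hypothetical credence that the entity has the relevant status

# Assumed asymmetry: wrongly withholding regard from a qualifying entity (cost 100)
# is far worse than wrongly extending it to a non-qualifying one (cost 5).
withhold = expected_cost(credence, cost_if_has_status=100, cost_if_lacks_status=0)
extend = expected_cost(credence, cost_if_has_status=0, cost_if_lacks_status=5)

print(f"Expected cost of withholding regard: {withhold}")  # 20.0
print(f"Expected cost of extending regard: {extend}")      # 4.0

Whether the real costs are asymmetric in this direction, and by how much, is exactly the empirical question Danaher says the practical resolution depends on.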

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly more sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are increasingly growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, being targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article's central recommendation is to avoid the ambiguous middle ground. Either create systems that are clearly non-conscious artifacts, designed so that users understand they are not sentient beings, or, if it ever becomes possible, create systems that clearly do deserve moral consideration as sentient beings, whose standing is unambiguous enough that users can treat them accordingly. In either case, the systems should be designed to invite emotional responses that match their actual moral status.

Since experts generally agree that current systems are not sentient to any meaningful degree, the practical upshot for now is the first option: design and present them as clearly non-conscious artifacts rather than leaving users guessing, because confusion about their status in either direction invites ethical mistakes.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems whose sentience and moral standing are plausibly debatable.
  • Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Friday, October 6, 2023

Taking the moral high ground: Deontological and absolutist moral dilemma judgments convey self-righteousness

Weiss, A., Burgmer, P., Rom, S. C., & Conway, P. (2024). 
Journal of Experimental Social Psychology, 110, 104505.

Abstract

Individuals who reject sacrificial harm to maximize overall outcomes, consistent with deontological (vs. utilitarian) ethics, appear warmer, more moral, and more trustworthy. Yet, deontological judgments may not only convey emotional reactions, but also strict adherence to moral rules. We therefore hypothesized that people view deontologists as more morally absolutist and hence self-righteous—as perceiving themselves as morally superior. In addition, both deontologists and utilitarians who base their decisions on rules (vs. emotions) should appear more self-righteous. Four studies (N = 1254) tested these hypotheses. Participants perceived targets as more self-righteous when they rejected (vs. accepted) sacrificial harm in classic moral dilemmas where harm maximizes outcomes (i.e., deontological vs. utilitarian judgments), but not parallel cases where harm fails to maximize outcomes (Study 1). Preregistered Study 2 replicated the focal effect, additionally indicating mediation via perceptions of moral absolutism. Study 3 found that targets who reported basing their deontological judgments on rules, compared to emotional reactions or when processing information was absent, appeared particularly self-righteous. Preregistered Study 4 included both deontological and utilitarian targets and manipulated whether their judgments were based on rules versus emotion (specifically sadness). Grounding either moral position in rules conveyed self-righteousness, while communicating emotions was a remedy. Furthermore, participants perceived targets as more self-righteous the more targets deviated from their own moral beliefs. Studies 3 and 4 additionally examined participants' self-disclosure intentions. In sum, deontological dilemma judgments may convey an absolutist, rule-focused view of morality, but any judgment stemming from rules (in contrast to sadness) promotes self-righteousness perceptions.


My quick take:

Overall, the findings suggest that people who reject sacrificial harm in moral dilemmas (the deontological response) are more likely to be perceived as self-righteous, because that response is read as reflecting a rigid, absolutist view of morality.

The studies also show that perceptions of self-righteousness depend not only on which judgment is made but on how it is made: targets who grounded their position in rules appeared more self-righteous than those who grounded it in emotion, whether the position itself was deontological or utilitarian.

It is important to note that these are perceptions, not evidence that deontologists actually are self-righteous. Still, the results suggest that if you want to avoid coming across that way, it may help to communicate the emotional weight of the situation rather than appealing to rules alone.

Thursday, October 5, 2023

Morality beyond the WEIRD: How the nomological network of morality varies across cultures

Atari, M., Haidt, J., et al. (2023).
Journal of Personality and Social Psychology.
Advance online publication.

Abstract

Moral foundations theory has been a generative framework in moral psychology in the last 2 decades. Here, we revisit the theory and develop a new measurement tool, the Moral Foundations Questionnaire–2 (MFQ-2), based on data from 25 populations. We demonstrate empirically that equality and proportionality are distinct moral foundations while retaining the other four existing foundations of care, loyalty, authority, and purity. Three studies were conducted to develop the MFQ-2 and to examine how the nomological network of moral foundations varies across 25 populations. Study 1 (N = 3,360, five populations) specified a refined top-down approach for measurement of moral foundations. Study 2 (N = 3,902, 19 populations) used a variety of methods (e.g., factor analysis, exploratory structural equations model, network psychometrics, alignment measurement equivalence) to provide evidence that the MFQ-2 fares well in terms of reliability and validity across cultural contexts. We also examined population-level, religious, ideological, and gender differences using the new measure. Study 3 (N = 1,410, three populations) provided evidence for convergent validity of the MFQ-2 scores, expanded the nomological network of the six moral foundations, and demonstrated the improved predictive power of the measure compared with the original MFQ. Importantly, our results showed how the nomological network of moral foundations varied across cultural contexts: consistent with a pluralistic view of morality, different foundations were influential in the network of moral foundations depending on cultural context. These studies sharpen the theoretical and methodological resolution of moral foundations theory and provide the field of moral psychology a more accurate instrument for investigating the many ways that moral conflicts and divisions are shaping the modern world.


Here's my summary:

The article examines how moral foundations theory (MFT) applies to cultures outside the Western, Educated, Industrialized, Rich, and Democratic (WEIRD) world. The original theory proposed five foundations: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. The revised measure introduced here, the MFQ-2, splits fairness into two distinct foundations, equality and proportionality, yielding six: care, equality, proportionality, loyalty, authority, and purity. Previous research has also shown that the relative importance of these foundations can vary across cultures.

The authors conducted three studies to examine the nomological network of morality (i.e., the pattern of relationships among the moral foundations and related constructs) across 25 populations. They found that this network varied across cultural contexts: which foundations were most influential within the network depended on the population being studied.

The authors argue that these findings suggest that MFT needs to be revised to take into account cultural variation. They propose that the nomological network of morality is shaped by a combination of universal moral principles and local cultural norms. This means that there is no single "correct" way to think about morality, and that what is considered moral in one culture may not be considered moral in another.

The article's findings have important implications for our understanding of morality and for cross-cultural research. They suggest that we need to be careful about making assumptions about the moral beliefs of people from other cultures. We also need to be aware of the ways in which culture can influence our own moral judgments.
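
As a purely illustrative sketch of what it means for the nomological network to vary across cultures, the Python snippet below simulates two populations in which the same two foundations are correlated to different degrees. The data, population labels, and correlation values are fabricated for illustration and have nothing to do with the MFQ-2 datasets.

# Illustration only: "different nomological networks" means the correlation
# structure among foundations differs across populations. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def simulate_population(care_fairness_r):
    """Simulate care and fairness scores with a chosen population correlation."""
    cov = [[1.0, care_fairness_r], [care_fairness_r, 1.0]]
    care, fairness = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return care, fairness

for label, r in [("Population A", 0.6), ("Population B", 0.1)]:
    care, fairness = simulate_population(r)
    observed = np.corrcoef(care, fairness)[0, 1]
    print(f"{label}: care-fairness correlation ~ {observed:.2f}")

In the real studies, of course, the networks involve all six foundations plus external criteria, and the cross-cultural differences are estimated with far more sophisticated psychometric tools.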

Wednesday, October 4, 2023

Humans’ Bias Blind Spot and Its Societal Significance

Pronin, E., & Hazel, L. (2023).
Current Directions in Psychological Science. Advance online publication.

Abstract

Human beings have a bias blind spot. We see bias all around us but sometimes not in ourselves. This asymmetry hinders self-knowledge and fuels interpersonal misunderstanding and conflict. It is rooted in cognitive mechanics differentiating self- and social perception as well as in self-esteem motives. It generalizes across social, cognitive, and behavioral biases; begins in childhood; and appears across cultures. People show a bias blind spot in high-stakes contexts, including investing, medicine, human resources, and law. Strategies for addressing the problem are described.

(cut)

Bias-limiting procedures

When it comes to eliminating bias, attempts to overcome it via conscious effort and educational training are not ideal. A different strategy is worth considering, when possible: preventing people’s biases from having a chance to operate in the first place, by limiting their access to biasing information. Examples include conducting auditions behind a screen (discussed earlier) and blind review of journal submissions. If fully blocking access to potentially biasing information is not possible or carries more costs than benefits, another less stringent option is worth considering, that is, controlling when the information is presented so that potentially biasing information comes late, ideally after a tentative judgment is made (e.g., “sequential unmasking”; Dror, 2018; “temporary cloaking”; Kang, 2021).

Because of the BBS, people can be resistant to procedures like this that limit their access to biasing information (see Fig. 3). For example, forensics experts prefer consciously trying to avoid bias over being shielded from even irrelevant biasing information (Kukucka et al., 2017). When high school teachers and ensemble singers were asked to assess blinding procedures (in auditioning and grading), they opposed them more for their own group than for the other group and even more for themselves personally (Pronin et al., 2022). This opposition is consistent with experiments showing that people are unconcerned about the effects of biasing decision processes when it comes to their own decisions (Hansen et al., 2014). In those experiments, participants made judgments using a biasing decision procedure (e.g., judging the quality of paintings only after looking to see if someone famous painted them). They readily acknowledged that the procedure was biased, nonetheless made decisions that were biased by that procedure, and then insisted that their conclusions were objective. This unwarranted confidence is a barrier to the self-imposition of bias-reducing procedures. It suggests the need for adopting procedures like this at the policy level rather than counting on individuals or their organizations to do so.

A different bias-limiting procedure that may induce resistance for these same reasons, and that therefore may also benefit from institutional or policy-level implementation, involves precommitting to decision criteria (e.g., Norton et al., 2004; Uhlmann & Cohen, 2005). For example, the human resources officer who precommits to judging job applicants more on the basis of industry experience versus educational background cannot then change that emphasis after seeing that their favorite candidate has unusually impressive academic credentials. This logic is incorporated, for example, into the system of allocating donor organs in the United States, which has explicit and predetermined criteria for making those allocations in order to avoid the possibility of bias in this high-stakes arena. When decision makers are instructed to provide objective criteria for their decision not before making that decision but rather when providing it—that is, the more typical request made of them—this not only makes bias more likely but also, because of the BBS, may even leave decision makers more confident in their objectivity than if they had not been asked to provide those criteria at all.

Here's my brief summary:

The article discusses the concept of the bias blind spot, which refers to people's tendency to recognize bias in others more readily than in themselves. Studies have consistently shown that people rate themselves as less susceptible to various biases than the average person. The bias blind spot occurs even for well-known biases that people readily accept exist. This blind spot has important societal implications, as it impedes recognition of one's own biases. It also leads to assuming others are more biased than oneself, resulting in decreased trust. Overcoming the bias blind spot is challenging but important for issues from prejudice to politics. It requires actively considering one's own potential biases when making evaluations about oneself or others.
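
As a hedged sketch of the bias-limiting procedures described in the excerpt (temporary cloaking of biasing information plus precommitted decision criteria), here is a toy Python version. The field names, weights, and candidate record are my own hypothetical examples, not material from the article.

# Toy sketch of two bias-limiting procedures:
# (1) cloak potentially biasing fields until a provisional judgment is recorded;
# (2) precommit to decision criteria before seeing any candidate.
# All field names, weights, and records are hypothetical.

PRECOMMITTED_WEIGHTS = {"industry_experience": 0.7, "education": 0.3}  # fixed up front
BIASING_FIELDS = {"name", "alma_mater", "referrer"}                    # cloaked at first

def provisional_score(candidate):
    """Score only the precommitted criteria, with biasing fields hidden."""
    visible = {k: v for k, v in candidate.items() if k not in BIASING_FIELDS}
    return sum(weight * visible[field] for field, weight in PRECOMMITTED_WEIGHTS.items())

candidate = {
    "name": "Candidate 1",
    "alma_mater": "Prestige U",
    "referrer": "Famous Person",
    "industry_experience": 8.0,  # hypothetical 0-10 ratings
    "education": 6.0,
}

score = provisional_score(candidate)  # recorded before any cloaked field is revealed
print(f"Provisional score (blind to {sorted(BIASING_FIELDS)}): {score:.1f}")

Only after the provisional score is logged would the cloaked fields be unmasked, if they are revealed at all; the point of sequencing things this way is that they can no longer shape the initial judgment.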

Tuesday, October 3, 2023

Emergent analogical reasoning in large language models

Webb, T., Holyoak, K.J. & Lu, H. 
Nat Hum Behav (2023).
https://doi.org/10.1038/s41562-023-01659-w

Abstract

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

Discussion

We have presented an extensive evaluation of analogical reasoning in a state-of-the-art large language model. We found that GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types. These included a novel text-based problem set (Digit Matrices) modeled closely on Raven’s Progressive Matrices, where GPT-3 both outperformed human participants, and captured a number of specific signatures of human behavior across problem types. Because we developed the Digit Matrix task specifically for this evaluation, we can be sure GPT-3 had never been exposed to problems of this type, and therefore was performing zero-shot reasoning. GPT-3 also displayed an ability to solve analogies based on more meaningful relations, including four-term verbal analogies and analogies between stories about naturalistic problems.

It is certainly not the case that GPT-3 mimics human analogical reasoning in all respects. Its performance is limited to the processing of information provided in its local context. Unlike humans, GPT-3 does not have long-term memory for specific episodes. It is therefore unable to search for previously-encountered situations that might create useful analogies with a current problem. For example, GPT-3 can use the general story to guide its solution to the radiation problem, but as soon as its context buffer is emptied, it reverts to giving its non-analogical solution to the problem – the system has learned nothing from processing the analogy. GPT-3’s reasoning ability is also limited by its lack of physical understanding of the world, as evidenced by its failure (in comparison with human children) to use an analogy to solve a transfer problem involving construction and use of simple tools. GPT-3’s difficulty with this task is likely due at least in part to its purely text-based input, lacking the multimodal experience necessary to build a more integrated world model.

But despite these major caveats, our evaluation reveals that GPT-3 exhibits a very general capacity to identify and generalize – in zero-shot fashion – relational patterns to be found within both formal problems and meaningful texts. These results are extremely surprising. It is commonly held that although neural networks can achieve a high level of performance within a narrowly-defined task domain, they cannot robustly generalize what they learn to new problems in the way that human learners do. Analogical reasoning is typically viewed as a quintessential example of this human capacity for abstraction and generalization, allowing human reasoners to intelligently approach novel problems zero-shot.
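
To give a concrete feel for what a text-based matrix problem of this general kind looks like, here is a toy sketch. It is my own simplified construction using a single arithmetic-progression rule; it is not the authors' Digit Matrices format or generation code, which use richer Raven-style rule structures.

# Toy text-based matrix item in the general spirit of Raven-style problems.
# This is a simplified illustration, not the Digit Matrices of Webb, Holyoak & Lu (2023).

def make_problem(start=1, step=2):
    """Fill a 3x3 grid row by row with an arithmetic sequence and blank the last cell."""
    matrix = [[start + step * (3 * r + c) for c in range(3)] for r in range(3)]
    answer = matrix[2][2]
    matrix[2][2] = None
    return matrix, answer

def format_prompt(matrix):
    rows = [" ".join("?" if x is None else str(x) for x in row) for row in matrix]
    return "Complete the pattern:\n" + "\n".join(rows) + "\nWhat number should replace the ?"

matrix, answer = make_problem()
print(format_prompt(matrix))
print(f"(intended answer: {answer})")

A zero-shot evaluation then amounts to sending the prompt text to the model with no worked examples and checking whether its completion matches the intended answer.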

Monday, October 2, 2023

Research: How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock & William Gerken
Harvard Business Review
Originally posted 5 March 2018

Here is an excerpt:

In our research, we wanted to understand just how contagious bad behavior is. To do so, we examined peer effects in misconduct by financial advisors, focusing on mergers between financial advisory firms that each have multiple branches. In these mergers, financial advisors meet new co-workers from one of the branches of the other firm, exposing them to new ideas and behaviors.

We collected an extensive data set using the detailed regulatory filings available for financial advisors. We defined misconduct as customer complaints for which the financial advisor either paid a settlement of at least $10,000 or lost an arbitration decision. We observed when complaints occurred for each financial advisor, as well as for the advisor’s co-workers.

We found that financial advisors are 37% more likely to commit misconduct if they encounter a new co-worker with a history of misconduct. This result implies that misconduct has a social multiplier of 1.59 — meaning that, on average, each case of misconduct results in an additional 0.59 cases of misconduct through peer effects.

However, observing similar behavior among co-workers does not explain why this similarity occurs. Co-workers could behave similarly because of peer effects – in which workers learn behaviors or social norms from each other — but similar behavior could arise because co-workers face the same incentives or because individuals prone to making similar choices naturally choose to work together.

In our research, we wanted to understand how peer effects contribute to the spread of misconduct. We compared financial advisors across different branches of the same firm, because this allowed us to control for the effect of the incentive structure faced by all advisors in the firm. We also focused on changes in co-workers caused by mergers, because this allowed us to remove the effect of advisors choosing their co-workers. As a result, we were able to isolate peer effects.


Here is my summary: 

The article discusses a study finding that even otherwise honest employees are more likely to commit misconduct when they work alongside a dishonest colleague. The study, by researchers Stephen Dimmock and William Gerken, found that financial advisors were 37% more likely to commit misconduct if they encountered a new co-worker with a history of misconduct.

The researchers believe that this is because people are more likely to learn bad behavior than good behavior. When we see someone else getting away with misconduct, it can make us think that it's okay to do the same thing. Additionally, when we're surrounded by people who are behaving badly, it can create a culture of acceptance for misconduct.
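
One hedged way to connect the excerpt's two figures, the 37% increase and the 1.59 social multiplier, is a simple geometric-propagation model in which each induced case goes on to induce further cases at the same rate. This back-of-the-envelope model is my own assumption for illustration; the excerpt does not say how the multiplier was computed.

# Back-of-the-envelope link between a 37% peer effect and a ~1.59 multiplier,
# assuming induced cases propagate further at the same rate (a geometric series).
# This model is an illustrative assumption, not the authors' stated calculation.

peer_effect = 0.37
total_per_seed = 1 / (1 - peer_effect)  # 1 + 0.37 + 0.37**2 + ... ~ 1.59
print(f"Total cases per initial case: {total_per_seed:.2f}")
print(f"Additional cases per initial case: {total_per_seed - 1:.2f}")  # ~0.59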

Sunday, October 1, 2023

US Surgeons Are Killing Themselves at an Alarming Rate

Christina Frangou
The Guardian
Originally published 26 Sept 23

Here is an excerpt:

Fifty years ago, in a landmark report called The Sick Physician, the American Medical Association declared physician impairment by psychiatric disorders, alcoholism and drug use a widespread problem. Even then, physicians had rates of narcotic addiction 30 to 100 times higher than the general population, and about 100 doctors a year in the US died by suicide.

The report called for better support for physicians who were struggling with mental health or addictions. Too many doctors hid their ailments because they worried about losing their licenses or the respect of their communities, according to the medical association.

Following the publication, state medical societies in the US, the organizations that give physicians license to practice, created confidential programs to help sick and impaired doctors. Physician health programs have a dual purpose: they connect doctors to treatment, and they assess the physician to ensure that patients are safe in their care. If a doctor’s condition is considered a threat to patient safety, the program may recommend that a doctor immediately cease practice, or they may recommend that a physician undergo drug and alcohol monitoring for three to five years in order to maintain their license. The client must sign an agreement not to participate in patient care until their personal health is addressed.

In rare and extreme cases, the physician health program will report the doctor to the state medical board to revoke their license.


Here is my summary:

The article sheds light on a distressing trend: US surgeons are dying by suicide at an alarming rate. It features a surgeon who has chosen to speak openly about the problem, underscoring how much strain the demanding nature of the profession places on surgeons' mental health and how fear of professional consequences, such as losing a license, keeps many from seeking help. The piece calls for greater awareness and comprehensive mental health support within the medical community to address the unique stressors that surgeons face.