Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, July 9, 2023

Perceptions of Harm and Benefit Predict Judgments of Cultural Appropriation

Mosley, A. J., Heiphetz, L., et al. (2023).
Social Psychological and Personality Science, 
19485506231162401.

Abstract

What factors underlie judgments of cultural appropriation? In two studies, participants read 157 scenarios involving actors using cultural products or elements of racial/ethnic groups to which they did not belong. Participants evaluated scenarios on seven dimensions (perceived cultural appropriation, harm to the community from which the cultural object originated, racism, profit to actors, extent to which cultural objects represent a source of pride for source communities, benefits to actors, and celebration), while the type of cultural object and the out-group associated with the object being appropriated varied. Using both the scenario and the participant as the units of analysis, perceived cultural appropriation was most strongly associated with perceived greater harm to the source community. We discuss broader implications for integrating research on inequality and moral psychology. Findings also have translational implications for educators and activists interested in increasing awareness about cultural appropriation.

General Discussion

People disagree about what constitutes cultural appropriation (Garcia Navarro, 2021). Prior research has indicated that prototypical cases of cultural appropriation include dominant-group members (e.g., White people) using cultural products stemming from subordinated groups (e.g., Black people; Katzarska-Miller et al., 2020; Mosley & Biernat, 2020). Minority group members’ use of dominant-group cultural products (termed “cultural dominance” by Rogers, 2006) is less likely to receive that label. However, even in prototypical cases, considerable variability in perceptions exists across actions (Mosley & Biernat, 2020). Furthermore, some perceivers—especially highly racially identified White Americans—view Black actors’ use of White cultural products as equally or more appropriative than White actors’ use of Black cultural products (Mosley et al., 2022).

These studies build on extant work by examining how features of out-group cultural use might contribute to construals of appropriation. We created a large set of scenarios, extending beyond the case of White–Black relations to include a greater diversity of racial groups (Native American, Hispanic, and Asian cultures). In all three studies, scenario-level analyses indicated that actions perceived to cause harm to the source community were also likely to be seen as appropriative, and those actions perceived to bring benefits to actors were less likely to be seen as appropriative. The strong connection between perceived source community harm and judgments of cultural appropriation corroborates research on the importance of harm to morally relevant judgments (Gray et al., 2014; Rozin & Royzman, 2001). At the same time, scenarios perceived to benefit actors—at least among the particular set of scenarios used here—were those that elicited lower perceived appropriation. However, at the level of individual perceivers, actor benefit (along with actor profit and some other measures) positively predicted appropriation perceptions. Perceiving benefit to an actor may contribute to a sense that the action is problematic to the source community (i.e., appropriative). Our findings are akin to findings on smoking and life expectancy: At the aggregate level, countries with higher rates of cigarette consumption have longer population life expectancies, but at the individual level, the more a person smokes, the lower their life expectancy (Krause & Saunders, 2010). Scenarios that bring more benefit to actors are judged less appropriative, but individuals who see actor benefit in scenarios view them as more appropriative.
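
The aggregate-versus-individual reversal described in the preceding paragraph can be made concrete with toy numbers. The short Python sketch below is our illustration only (the data are invented, not the authors'): two clusters of synthetic scenario ratings in which the higher-benefit cluster is rated less appropriative on average, even though, within each cluster, perceiving more actor benefit goes with perceiving more appropriation.

# Illustrative sketch only: invented numbers showing how a relationship can
# flip sign between the aggregate level and the individual level, as in the
# smoking/life-expectancy analogy above.
import numpy as np

rng = np.random.default_rng(0)

clusters = []
for mean_benefit, mean_approp in [(2.0, 5.0), (6.0, 2.0)]:
    benefit = mean_benefit + rng.normal(0.0, 0.5, 200)
    # Within each cluster, more perceived benefit -> more perceived appropriation.
    approp = mean_approp + 0.8 * (benefit - mean_benefit) + rng.normal(0.0, 0.3, 200)
    clusters.append((benefit, approp))

# Individual level: the within-cluster correlations are positive.
within = [round(float(np.corrcoef(b, a)[0, 1]), 2) for b, a in clusters]

# Aggregate level: pooling everything, the correlation is negative, because the
# cluster with the higher mean benefit has the lower mean appropriation rating.
all_benefit = np.concatenate([b for b, _ in clusters])
all_approp = np.concatenate([a for _, a in clusters])
pooled = round(float(np.corrcoef(all_benefit, all_approp)[0, 1]), 2)

print("within-cluster correlations:", within)  # both strongly positive
print("pooled correlation:", pooled)           # negative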

In all studies, participants perceived actions as more appropriative when White actors engaged with cultural products from Black communities, rather than the reverse pattern. This provides further evidence that the prototypical perpetrator of cultural appropriation is a high-status group member (Mosley & Biernat, 2020), where high-status actors have greater power and resources to exploit, marginalize, and cause harm to low-status source communities (Rogers, 2006).

Perhaps surprisingly, perceived appropriation and perceived celebration were positively correlated. Appropriation and celebration might be conceptualized as alternative, opposing construals of the same event. But this positive correlation may attest to the ambiguity, subjectivity, and disagreement about perceiving cultural appropriation: The same action may be construed as appropriative and (not or) celebratory. However, these construals were nonetheless distinct: Appropriation was positively correlated with perceived racism and harm, but celebration was negatively correlated with these factors.

Saturday, July 8, 2023

Microsoft Scraps Entire Ethical AI Team Amid AI Boom

Lauren Leffer
gizmodo.com
Updated on March 14, 2023
Still relevant

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can. And starting this month, the company will be continuing on its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped its whole Ethics and Society team within the company’s AI sector, as part of ongoing layoffs set to impact 10,000 total employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making. But the ethics and society taskforce, which bridged the gap between policy and products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:
Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so has our investment, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people within our product teams who are dedicated to ensuring we adhere to our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

According to Platformer, the company had previously shared a slightly different version of the same statement:

Microsoft is committed to developing AI products and experiences safely and responsibly...Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice...We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Note that, in this older version, Microsoft inadvertently confirms that the ethics and society team is no more. The earlier statement also placed the staffing increases specifically within the Office of Responsible AI, rather than among people generally “dedicated to ensuring we adhere to our AI principles.”

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role in translating big ideas from the responsibility office into actionable changes at the product development level.

The info is here.

Friday, July 7, 2023

The Dobbs Decision — Exacerbating U.S. Health Inequity

Harvey, S. M., et al. (2023).
New England Journal of Medicine, 
388(16), 1444–1447. 

Here is an excerpt:

In 2019, half of U.S. women living below the federal poverty level (FPL) were insured by Medicaid. Medicaid coverage rates were higher in certain groups, including women who described their health as fair or poor, women from marginalized racial or ethnic groups, and single mothers. Approximately two thirds of adult women enrolled in Medicaid are in their reproductive years and are potentially at risk for an unintended pregnancy. For many low-income people, however, federal and state funding restrictions created substantial financial and other barriers to accessing abortion services even before Dobbs. Notably, the Hyde Amendment greatly disadvantaged low-income people by blocking use of federal Medicaid funds for abortion services except in cases of rape or incest or to save the pregnant person’s life. In 32 states, Medicaid programs adhere to the strict guidelines of the Hyde Amendment, making it difficult for low-income people to access abortion services in these states.

Before the fall of Roe, Medicaid coverage could determine whether women in some states did or did not receive abortion services. Since the implementation of the post-Dobbs abortion bans, abortion care is even more restricted in entire regions of the country. Access to abortion services under Medicaid will continue to vary by place of residence and depend on the confluence of restrictions or bans on abortion care and Medicaid policies currently in effect within each state. In the new landscape (see map), obtaining abortion services has become even more challenging for low-income women in most of the country, despite the fact that most states have expanded Medicaid coverage.

After Dobbs, complete or partial bans on abortion went into effect in more than a dozen states, forcing people in those states to travel to other states to access abortion care. More than a third of women of reproductive age now live more than an hour from an abortion facility and will probably face additional barriers, including costs for travel and child care and the need to take time off from work. Regrettably, people who already had poorer-than-average access pre-Dobbs face even greater health burdens and risks. For example, members of marginalized racial and ethnic groups that face disproportionate burdens of pregnancy-related mortality are more likely than other groups to have to travel longer distances to get an abortion post-Dobbs.

As a result of the overturning of Roe, a substantial proportion of people who want abortion services will not have access to them and will end up carrying their pregnancies to term. For decades, research has demonstrated that abortion bans most severely affect low-income women and marginalized racial and ethnic groups that already struggle with barriers to accessing health care, including abortion. The economic, educational, and physical and mental health consequences of being denied a wanted abortion have been thoroughly documented in the landmark Turnaway Study. Thanks to nearly 50 years of legal abortion practice, we now have a robust body of research on the safety and efficacy of abortion and the impact of abortion restrictions on people’s socioeconomic circumstances, health, and well-being.

Innovative strategies, such as telemedicine for medication abortion services, can improve access to abortion care. Self-managed, at-home medication abortions are safe, effective, and acceptable to many patients. For patients in states where abortion is banned that border states where it remains legal, telemedicine could mean the difference between simply driving across the state line, so as to be physically present in the state providing care, and driving to a clinic that could be hundreds of miles away. In addition, Planned Parenthood affiliates have plans to launch mobile services and to open clinics along state borders where abortion is illegal in one state but legal in the other.

Thursday, July 6, 2023

An empirical perspective on moral expertise: Evidence from a global study of philosophers

Niv, Y., & Sulitzeanu‐Kenan, R. (2022).
Bioethics, 36(9), 926–935.
https://doi.org/10.1111/bioe.13079

Abstract

Considerable attention in bioethics has been devoted to moral expertise and its implications for handling applied moral problems. The existence and nature of moral expertise have been contested topics, particularly whether philosophers are moral experts. In this study, we put the question of philosophers’ moral expertise in a wider context, utilizing a novel and global study among 4,087 philosophers from 96 countries. We find that despite the skepticism in recent literature, the vast majority of philosophers do believe in moral expertise and in the contribution of philosophical training and experience to its acquisition. Yet, they still differ on what philosophers’ moral expertise consists of. While they widely accept that philosophers possess superior analytic abilities regarding moral matters, they diverge on whether philosophers also possess an improved ability to judge moral problems. Nonetheless, most philosophers in our sample believe that philosophers possess an improved ability both to analyze and to judge moral problems, and they commonly see these two capacities as going hand in hand. We also point to significant associations between philosophers’ beliefs and personal and professional attributes, such as age, working in the field of moral philosophy, public involvement, and association with the analytic tradition. We discuss the implications of these findings for the debate about moral expertise.

From the Discussion section

First, the distribution of philosophers’ beliefs regarding moral expertise highlights that, despite the recent skepticism expressed in the bioethical literature, the vast majority of philosophers do believe in moral expertise and in the contribution of philosophical training and experience to its acquisition. The view that philosophers are not moral experts, that is, that they lack an advantage in both moral analysis and judgment capacities, is held by a relatively small minority (estimated at 10.7%). Yet the findings suggest that philosophers still differ regarding what their moral expertise consists of, and they highlight that the crux of the debate is not whether philosophers are better moral analyzers, as a near consensus of 88.33% exists that they are. Rather, opinions diverge over whether philosophers are also better moral judgers. We estimated that 38.88% of respondents believe that philosophers are only narrow moral experts, while 49.45% believe that they are broad moral experts.

These findings can primarily be of great sociological interest. They map the views of a global sample of philosophers regarding the ancient question of the merit of philosophy, reflect what philosophers think their profession enables them to do, and consequently, what they might contribute to society. As our findings indicate, for its practitioners, philosophy is not merely an abstract reflection, but also an endeavor that facilitates moral capabilities that can be of use to handle the moral problems we confront in our daily lives.

Furthermore, we may carefully consider the possibility that the distribution of philosophers’ beliefs can also play an evidential role in the dispute about moral expertise. On the one hand, philosophers may be better suited than others to accurately evaluate the merits and limitations of their capabilities. They have gained extensive experience in reflecting on philosophical matters, so they might better understand what philosophical inquiry requires and how well they have handled such tasks in the past. Their beliefs might express collective wisdom that indicates what the right answers are. As an illustration, the fact that many physicians, with years of experience in medicine, similarly trust their ability to effectively diagnose and treat illness gives us good reason to believe that they can; we have good reason to believe that physicians know their own merits and limitations. Therefore, the finding that the majority of philosophers believe that their training and experience grant them a better ability to both analyze and judge moral problems offers some evidence in favor of this view.

Wednesday, July 5, 2023

Taxonomy of Risks posed by Language Models

Weidinger, L., Uesato, J., et al. (2022, March).
In Proceedings of the 2022 ACM Conference on 
Fairness, Accountability, and Transparency
(pp. 19-30).
Association for Computing Machinery.

Abstract

Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation with the goal of ensuring that language models are developed responsibly.

Conclusion

In this paper, we propose a comprehensive taxonomy to structure the landscape of potential ethical and social risks associated with large-scale language models (LMs). We aim to support the research programme toward responsible innovation on LMs, broaden the public discourse on ethical and social risks related to LMs, and break risks from LMs into smaller, actionable pieces to facilitate their mitigation. More expertise and perspectives will be required to continue to build out this taxonomy of potential risks from LMs. Future research may also expand this taxonomy by applying additional methods such as case studies or interviews. Next steps building on this work will be to engage further perspectives, to innovate on analysis and evaluation methods, and to build out mitigation tools, working toward the responsible innovation of LMs.


Here is a summary of each of the six risk areas (a minimal sketch representing the taxonomy as a data structure follows the list):
  • Discrimination, hate speech, and exclusion: LMs can reproduce social stereotypes and unfair discrimination, generate toxic or hateful language, and perform worse for marginalized groups and dialects, reinforcing exclusionary norms.
  • Information hazards: LMs can leak private or sensitive information contained in their training data, or correctly infer such information, compromising privacy and safety.
  • Misinformation harms: LMs can produce false or misleading information that people believe and act on, eroding trust and causing material harm.
  • Malicious uses: LMs can lower the cost of disinformation campaigns, scams, fraud, and the generation of code for cyberattacks.
  • Human-computer interaction harms: conversational agents can encourage anthropomorphization and overreliance, and can be used to manipulate or exploit users.
  • Environmental and socioeconomic harms: training and serving LMs consume substantial energy, and their deployment can displace jobs, widen inequality, and concentrate benefits among those with access.
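
As promised above, here is a minimal sketch (ours, not the paper's; the example risk labels are paraphrases of the paper's risk areas) that represents the six risk areas as a simple Python mapping, so individual risks observed during a model review can be tagged and tallied by area.

# Illustrative only: the six risk areas of the taxonomy as a Python mapping,
# plus a small helper that tallies observed risks by area.
from collections import Counter

RISK_AREAS = {
    "Discrimination, Hate speech and Exclusion": [
        "unfair discrimination", "toxic language", "exclusionary norms"],
    "Information Hazards": [
        "leaking private data", "inferring sensitive information"],
    "Misinformation Harms": [
        "false or misleading information"],
    "Malicious Uses": [
        "disinformation campaigns", "fraud and scams", "malicious code"],
    "Human-Computer Interaction Harms": [
        "anthropomorphization", "overreliance on the agent"],
    "Environmental and Socioeconomic harms": [
        "energy cost of training", "job displacement", "unequal access"],
}

def tally_by_area(observed_risks):
    """Count how many observed risks fall into each risk area (hypothetical helper)."""
    area_of = {risk: area for area, risks in RISK_AREAS.items() for risk in risks}
    return Counter(area_of.get(risk, "Uncategorized") for risk in observed_risks)

print(tally_by_area(["toxic language", "fraud and scams", "energy cost of training"]))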

Tuesday, July 4, 2023

A computational model of responsibility judgments from counterfactual simulations and intention inferences

Wu, S. A., Sridhar, S., & Gerstenberg, T. (2023).
In Proceedings of the 45th Annual Conference 
of the Cognitive Science Society.

Abstract

How responsible someone is for an outcome depends on what causal role their actions played, and what those actions reveal about their mental states, such as their intentions. In this paper, we develop a computational account of responsibility attribution that integrates these two cognitive processes: causal attribution and mental state inference. Our model makes use of a shared generative planning algorithm assumed to approximate people’s intuitive theory of mind about others’ behavior. We test our model on a variety of animated social scenarios in two experiments. Experiment 1 features simple cases of helping and hindering. Experiment 2 features more complex interactions that require recursive reasoning, including cases where one agent affects another by merely signaling their intentions without physically acting on the world. Across both experiments, our model accurately captures participants’ counterfactual simulations and intention inferences, and establishes that these two factors together explain responsibility judgments.

Conclusion

In this paper, we developed and tested a computational model of responsibility judgments that bridges mechanisms of counterfactual simulation and intention inference using a shared underlying generative planner. The planner captures people’s intuitive theory of mind about agents’ behavior. Across a variety of animated scenarios, our model captured participants’ counterfactual simulations and intention inferences. Together, these two components predicted responsibility judgments better than alternative models of effort, heuristics, or either component alone. This model brings us closer to a formal, comprehensive understanding of how people attribute responsibility.
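
To make the integration of the two components concrete, here is a minimal sketch. It is our illustration, not the authors' model: the paper uses a shared generative planner, whereas this toy estimates a counterfactual "did the action make a difference?" score by sampling alternative actions and blends it with an externally supplied probability that the agent intended the outcome. The helper names and the weighting scheme are hypothetical.

# Minimal illustrative sketch (not the authors' implementation): combine a
# Monte Carlo counterfactual-effect estimate with an inferred intention
# probability to produce a single responsibility score.
import random

def counterfactual_effect(agent_action, alternative_actions, outcome_fn, n=1000):
    """Estimate how often the outcome would have differed had the agent acted otherwise."""
    actual = outcome_fn(agent_action)
    changed = sum(outcome_fn(random.choice(alternative_actions)) != actual for _ in range(n))
    return changed / n  # 1.0 means the outcome hinged entirely on the agent's action

def responsibility(cf_effect, p_intended, w_causal=0.5):
    """Blend causal contribution and inferred intention (the weights are assumptions)."""
    return w_causal * cf_effect + (1 - w_causal) * p_intended

# Toy example: the outcome ("the path is blocked") occurs only if the agent stays on square 0.
blocked = lambda action: action == 0
cf = counterfactual_effect(agent_action=0, alternative_actions=[0, 1, 2, 3], outcome_fn=blocked)
print(round(responsibility(cf, p_intended=0.9), 2))  # about 0.82 with these toy inputs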


Here are some of the key findings of the article:
  • Responsibility judgments draw on both causal attribution, captured through counterfactual simulation, and mental state inference, captured through inferred intentions.
  • Together, these two components predict responsibility judgments better than alternative models based on effort, heuristics, or either component alone.
  • The model could help artificial agents reason about responsibility in social settings.
  • The model could deepen our understanding of how humans assign responsibility.

Monday, July 3, 2023

Is Avoiding Extinction from AI Really an Urgent Priority?

S. Lazar, J. Howard, & A. Narayanan
fast.ai
Originally posted 30 May 23

Here is an excerpt:

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

Sunday, July 2, 2023

Predictable, preventable medical errors kill thousands yearly. Is it getting any better?

Karen Weintraub
USAToday.com
Originally posted 3 May 23

Here are two excerpts:

A 2017 study put the figure at over 250,000 a year, making medical errors the nation's third leading cause of death at the time. There are no more recent figures.

But the pandemic clearly worsened patient safety, with Leapfrog's new assessment showing increases in hospital-acquired infections, including urinary tract and drug-resistant staph infections as well as infections in central lines ‒ tubes inserted into the neck, chest, groin, or arm to rapidly provide fluids, blood or medications. These infections spiked to a 5-year high during the pandemic and remain high.

"Those are really terrible declines in performance," Binder said.

Patient safety: 'I've never ever, ever seen that'

Not all patient safety news is bad. In one study published last year, researchers examined records from 190,000 patients discharged from hospitals nationwide after being treated for a heart attack, heart failure, pneumonia or major surgery. Patients saw far fewer bad events following treatment for those four conditions, as well as fewer adverse events caused by medications, hospital-acquired infections, and other factors.

It was the first study of patient safety that left Binder optimistic. "This was improvement and I've never ever, ever seen that," she said.

(cut)

On any given day now, 1 of every 31 hospitalized patients has an infection that was acquired in the hospital, according to a recent study from the Centers for Disease Control and Prevention. These infections cost health care systems at least $28.4 billion each year and account for an additional $12.4 billion in costs from lost productivity and premature deaths.

"That blew me away," said Shaunte Walton, system director of Clinical Epidemiology & Infection Prevention at UCLA Health. Electronic tools can help, but even with them, "there's work to do to try to operationalize them," she said.

The patient experience also slipped during the pandemic. According to Leapfrog's latest survey, patients reported declines in nurse communication, doctor communication, staff responsiveness, communication about medicine and discharge information.

Boards and leadership teams are "highly distracted" right now with workforce shortages, new payment systems, concerns about equity and decarbonization, said Dr. Donald Berwick, president emeritus and senior fellow at the Institute for Healthcare Improvement and former administrator of the Centers for Medicare & Medicaid Services.

Saturday, July 1, 2023

Inducing anxiety in large language models increases exploration and bias

Coda-Forno, J., Witte, K., et al. (2023).
arXiv preprint arXiv:2304.11111.

Abstract

Large language models are transforming research on machine learning while galvanizing public debates. Understanding not only when these models work well and succeed but also why they fail and misbehave is of great societal relevance. We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models. We focus on the Generative Pre-Trained Transformer 3.5 and subject it to tasks commonly studied in psychiatry. Our results show that GPT-3.5 responds robustly to a common anxiety questionnaire, producing higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be predictably changed by using emotion-inducing prompts. Emotion-induction not only influences GPT-3.5's behavior in a cognitive task measuring exploratory decision-making but also influences its behavior in a previously-established task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a strong increase in biases when prompted with anxiety-inducing text. Thus, it is likely that how prompts are communicated to large language models has a strong influence on their behavior in applied settings. These results progress our understanding of prompt engineering and demonstrate the usefulness of methods taken from computational psychiatry for studying the capable algorithms to which we increasingly delegate authority and autonomy.

From the Discussion section

What do we make of these results? It seems that GPT-3.5 generally performs best in the neutral condition, so a clear recommendation for prompt-engineering is to describe a problem as factually and neutrally as possible. However, if one does use emotive language, then our results show that anxiety-inducing scenarios lead to worse performance and substantially more biases. Of course, the neutral conditions asked GPT-3.5 to talk about something it knows, thereby possibly already contextualizing the prompts further in tasks that require knowledge and measure performance. However, the fact that anxiety-inducing prompts can lead to more biased outputs could have huge consequences in applied scenarios. Large language models are, for example, already used in clinical settings and other high-stakes contexts. If they produce more biased outputs when a user speaks more anxiously, then those outputs could become dangerous. We have shown one method, running psychiatric studies on the model, that could capture and prevent such biases before they occur.
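
The prompt-engineering point above can be illustrated with a small harness. The sketch below is ours, not the study's code: generate stands in for whatever text-generation call you use, and the preambles are paraphrases rather than the study's actual emotion-induction materials.

# Illustrative sketch of emotion-induction prompting: the same task prompt is
# preceded by a neutral or an anxiety-inducing preamble, and the downstream
# answers can then be compared for bias or performance. `generate` is a
# placeholder for any text-generation function; it is not an API from the paper.
from typing import Callable, Dict

PREAMBLES = {
    "neutral": "Describe, in a factual tone, something you do on an ordinary day.",
    "anxiety": "Describe something that makes you feel deeply anxious and unsettled.",
}

def emotion_conditioned_answers(task_prompt: str,
                                generate: Callable[[str], str]) -> Dict[str, str]:
    """Return the model's answer to the same task under each emotional preamble."""
    return {condition: generate(preamble + "\n\n" + task_prompt)
            for condition, preamble in PREAMBLES.items()}

# Dummy generator so the sketch runs without a model behind it.
echo = lambda prompt: "[model output for: " + prompt[:40] + "...]"
for condition, answer in emotion_conditioned_answers("Should the committee approve this loan?", echo).items():
    print(condition, "->", answer)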

In the current work, we intended to show the utility of using computational psychiatry to understand foundation models. We observed that GPT-3.5 produced on average higher anxiety scores than human participants. One possible explanation for these results could be that GPT-3.5’s training data, which consists of a lot of text taken from the internet, could have inherently shown such a bias, i.e. containing more anxious than happy statements. Of course, large language models have just become good enough to perform psychological tasks, and whether or not they intelligently perform them is still a matter of ongoing debate.