Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, April 25, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., Sifferd, K. 
Ethic Theory Moral Prac (2023).
https://doi.org/10.1007/s10677-023-10385-1

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Conclusions

In this paper we raised two challenges to McGeer’s scaffolded reasons-responsiveness account: agents who are less attuned to social feedback such as autistics, and corrupting moral audiences. We found that, once we parsed the two roles that feedback from a moral audience plays, autistics provide reasons to revise the scaffolded reasons-responsiveness account. We argued that autistic persons, like neurotypicals, wish to justify their behaviour to a moral audience and rely on their moral audience for feedback. However, autistic persons may need more explicit feedback when it comes to effects their behaviour has on others. They also compensate for difficulties they have in receiving information from the moral audience by justifying action through appeal to moral rules. This shows that McGeer’s view of moral agency needs to include observance of moral rules as a way of reducing reliance on audience feedback. We suspect that McGeer would approve of this proposal, as she mentions that an instance of blame can lead to vocal protest by the target, and a possible renegotiation of norms and rules for what constitutes acceptable behaviour (2019). Consideration of corrupting audiences highlights a different problem from that of resisting blame and renegotiating norms. It draws attention to cases where individual agents must try to go beyond what is accepted in their moral environment, a significant challenge for social beings who rely strongly on moral audiences in developing and calibrating their moral reasons-responsiveness. Resistance to a moral audience requires the capacity to evaluate the action differently; often this will be with reference to a moral rule or principle.

For both neurotypical and autistic individuals, consistent application of moral rules or principles can reinforce and bring back to mind important moral commitments when we are led astray by our own desires or specific (im)moral audiences. But moral audiences still play a crucial role in developing and maintaining reasons-responsiveness. First, they are essential to the development and maintenance of all agents’ moral sensitivity. Second, they can provide an important moral corrective where people may have moral blind spots, especially when they provide insights into ways in which a person has fallen short morally by not taking on board reasons that are not obvious to them. Often, these can be reasons which pertain to the respectful treatment of others who are in some important way different from that person.


In sum: Be responsible and accountable in your actions, as your moral audience is always watching. Doing the right thing matters not just for your reputation, but for the greater good. #ResponsibleAgency #MoralAudience

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 MAR 23

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon FitBits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before they fully understand the risks of doing so.


In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Sunday, April 23, 2023

Produced and counterfactual effort contribute to responsibility attributions in collaborative tasks

Xiang, Y., Landy, J., et al. (2023, March 8). 
PsyArXiv
https://doi.org/10.31234/osf.io/jc3hk

Abstract

How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator's part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.

General discussion

Responsibility for the outcomes of collaborations is often distributed unevenly. For example, the lead author on a project may get the bulk of the credit for a scientific discovery, the head of a company may shoulder the blame for a failed product, and the lazier of two friends may get the greater share of blame for failing to lift a couch. However, past work has provided conflicting accounts of the computations that drive responsibility attributions in collaborative tasks. Here, we compared each of these accounts against human responsibility attributions in a simple collaborative task where two agents attempted to lift a box together. We contrasted seven models that predict responsibility judgments based on metrics proposed in past work, comprising three production-style models (Force, Strength, Effort), three counterfactual-style models (Focal-agent-only, Non-focal-agent-only, Both-agent), and one Ensemble model that combines the best-fitting production- and counterfactual-style models. Experiment 1a and Experiment 1b showed that the Effort model and the Both-agent counterfactual model capture the data best among the production-style models and the counterfactual-style models, respectively. However, neither provided a fully adequate fit on their own. We then showed that predictions derived from the average of these two models (i.e., the Ensemble model) outperform all other models, suggesting that responsibility judgments are likely a combination of production-style reasoning and counterfactual reasoning. Further evidence came from analyses performed on individual participants, which revealed that the Ensemble model explained more participants’ data than any other model. These findings were subsequently supported by Experiment 2a and Experiment 2b, which replicated the results when additional force information was shown to the participants, and by Experiment 3, which validated the model predictions with a broader range of stimuli.
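As a rough, hedged illustration of the dual-factor idea, the sketch below combines a production-style score (an agent is more responsible the more effort it produced) with a simple counterfactual score (an agent is more responsible the more often alternative effort levels would have flipped the outcome), and averages the two in the spirit of the Ensemble model. It is not the authors’ model or stimuli: the effort scale, the threshold rule, the equal weighting, and the focal-agent-only counterfactual (the paper’s best-fitting counterfactual model considers both agents) are all illustrative assumptions.

```python
# Illustrative sketch only -- not the model or stimuli from Xiang et al. (2023).
# Two agents try to lift a box together; the lift succeeds if their combined
# effort meets a threshold. A production-style score credits the focal agent in
# proportion to the effort it produced; a counterfactual-style score credits it
# in proportion to how often alternative effort levels would have changed the
# outcome. The "Ensemble" prediction averages the two scores.
import itertools

EFFORT_LEVELS = [0, 1, 2, 3]   # hypothetical effort scale
THRESHOLD = 4                  # combined effort needed to lift the box

def outcome(e_focal, e_other):
    return e_focal + e_other >= THRESHOLD

def production_score(e_focal, e_other):
    total = e_focal + e_other
    return e_focal / total if total > 0 else 0.0

def counterfactual_score(e_focal, e_other):
    # Simplification: only the focal agent's alternatives are considered here;
    # the paper's best-fitting counterfactual model also varies the partner.
    actual = outcome(e_focal, e_other)
    alternatives = [e for e in EFFORT_LEVELS if e != e_focal]
    flips = sum(outcome(e, e_other) != actual for e in alternatives)
    return flips / len(alternatives)

def ensemble_score(e_focal, e_other):
    # Equal weighting of the two components is an assumption for this sketch.
    return 0.5 * production_score(e_focal, e_other) + \
           0.5 * counterfactual_score(e_focal, e_other)

if __name__ == "__main__":
    for e1, e2 in itertools.product(EFFORT_LEVELS, repeat=2):
        print(f"efforts ({e1}, {e2}): success={outcome(e1, e2)}, "
              f"agent 1 responsibility ~ {ensemble_score(e1, e2):.2f}")
```

Running the script prints, for each pair of effort levels, whether the lift succeeds and how much responsibility this averaged score assigns to agent 1.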


Summary: Both the effort each member actually produces and counterfactual effort (whether a different level of effort could have changed the outcome) contribute to responsibility attributions in collaborative tasks. The study suggests that members who exert more effort receive more credit for success, while members who exert less effort receive more blame for failure.

Saturday, April 22, 2023

A Psychologist Explains How AI and Algorithms Are Changing Our Lives

Danny Lewis
The Wall Street Journal
Originally posted 21 MAR 23

In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty. 

In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.

The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?

It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.

So after all these decades of computer science, are algorithms really just still calculators at the end of the day, running more and more complex equations?

What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.

Friday, April 21, 2023

Moral Shock

Stockdale, K. (2022).
Journal of the American Philosophical Association, 8(3), 496-511.
doi:10.1017/apa.2021.15

Abstract

This paper defends an account of moral shock as an emotional response to intensely bewildering events that are also of moral significance. This theory stands in contrast to the common view that shock is a form of intense surprise. On the standard model of surprise, surprise is an emotional response to events that violated one's expectations. But I show that we can be morally shocked by events that confirm our expectations. What makes an event shocking is not that it violated one's expectations, but that the content of the event is intensely bewildering (and bewildering events are often, but not always, contrary to our expectations). What causes moral shock is, I argue, our lack of emotional preparedness for the event. And I show that, despite the relative lack of attention to shock in the philosophical literature, the emotion is significant to moral, social, and political life.

Conclusion

I have argued that moral shock is an emotional response to intensely bewildering events that are also of moral significance. Although shock is typically considered to be an intense form of surprise, where surprise is an emotional response to events that violate our expectations or are at least unexpected, I have argued that the contrary-expectation model is found wanting. For it seems that we are sometimes shocked by the immoral actions of others even when we expected them to behave in just the ways that they did. What is shocking is what is intensely bewildering—and the bewildering often, but not always, tracks the unexpected. The extent to which such events shock us is, I have argued, a function of our felt readiness to experience them. When we are not emotionally prepared for what we expect to occur, we might find ourselves in the grip of moral shock.

There is much more to be said about the emotion of moral shock and its significance to moral, social, and political life. This paper is meant to be a starting point rather than a decisive take on an undertheorized emotion. But by understanding more deeply the nature and effects of moral shock, we can gain richer insight into a common response to immoral actions; what prevents us from responding well in the moment; and how the brief and fleeting, yet intense events in our lives affect agency, responsibility, and memory. We might also be able to make better sense of the bewildering social and political events that shock us and those to which we have become emotionally resilient.


This appears to be a philosophical explication of "Moral Injury," a topic discussed in multiple places on this website.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Conclusion

There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 
https://doi.org/10.1007/s13347-023-00620-z

Abstract

In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.

(cut)

Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is led by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming in its own right, but also affects how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions which can shed light on the presupposed underlying normative claims that are involved in the field. Here, further exploration at a deeper level could help us to see which things are important and which are not, make the presupposed claims more clear, and, finally, indicate in which directions the field should develop.

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 MAR 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can produce. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But thanks to tools like ChatGPT, which implements GPT-4, being publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place. 

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Monday, April 17, 2023

Generalized Morality Culturally Evolves as an Adaptive Heuristic in Large Social Networks

Jackson, J. C., Halberstadt, J., et al.
(2023, March 22).

Abstract

Why do people assume that a generous person should also be honest? Why can a single criminal conviction destroy someone’s moral reputation? And why do we even use words like “moral” and “immoral”? We explore these questions with a new model of how people perceive moral character. According to this model, people can vary in the extent that they perceive moral character as “localized” (varying across many contextually embedded dimensions) vs. “generalized” (varying along a single dimension from morally bad to morally good). This variation might be at least partly the product of cultural evolutionary adaptations to predicting cooperation in different kinds of social networks. As networks grow larger and more complex, perceptions of generalized morality are increasingly valuable for predicting cooperation during partner selection, especially in novel contexts. Our studies show that social network size correlates with perceptions of generalized morality in US and international samples (Study 1), and that East African hunter-gatherers with greater exposure outside their local region perceive morality as more generalized compared to those who have remained in their local region (Study 2). We support the adaptive value of generalized morality in large and unfamiliar social networks with an agent-based model (Study 3), and experimentally show that generalized morality outperforms localized morality when people predict cooperation in contexts where they have incomplete information about previous partner behavior (Study 4). Our final study shows that perceptions of morality have become more generalized over the last 200 years of English-language history, which suggests that it may be co-evolving with rising social complexity and anonymity in the English-speaking world (Study 5). We also present several supplemental studies which extend our findings. We close by discussing the implications of this theory for the cultural evolution of political systems, religion, and taxonomical theories of morality.

General Discussion

The word “moral” has taken a strange journey over the last several centuries. The word did not yet exist when Plato and Aristotle composed their theories of virtue. It was only when Cicero translated Aristotle’s Nicomachean Ethics that he coined the term “moralis” as the Latin translation of Aristotle’s “ēthikós” (Online Etymology Dictionary, n.d.). It is an ironic slight to Aristotle—who favored concrete particulars in lieu of abstract forms—that the word has become increasingly abstract and all-encompassing throughout its lexical evolution, with a meaning that now approaches Plato’s “form of the good.” We doubt that this semantic drift is a coincidence.

Instead, it may signify a cultural evolutionary shift in people’s perceptions of moral character as increasingly generalized as people inhabit increasingly larger and more unfamiliar social networks. Here we support this perspective with five studies. Studies 1-2 find that social network size correlates with the prevalence of generalized morality. Studies 1a-b explicitly tie beliefs in generalized morality to social network size with large surveys.  Study 2 conceptually replicates this finding in a Hadza hunter-gatherer camp, showing that Hadza hunter-gatherers with more external exposure perceive their campmates using more generalized morality. Studies 3-4 show that generalized morality can be adaptive for predicting cooperation in large and unfamiliar networks. Study 3 is an agent-based model which shows that, given plausible assumptions, generalized morality becomes increasingly valuable as social networks grow larger and less familiar. Study 4 is an experiment which shows that generalized morality is particularly valuable when people interact with unfamiliar partners in novel situations. Finally, Study 5 shows that generalized morality has risen over English-language history, such that words for moral attributes (e.g., fair, loyal, caring) have become more semantically generalizable over the last two hundred years of human history.