Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, April 27, 2023

A dark side of hope: Understanding why investors cling onto losing stocks

Luo, S. X., et al. (2022).
Journal of Behavioral Decision Making.
https://doi.org/10.1002/bdm.2304

Abstract

Investors are often inclined to keep losing stocks too long, despite this being irrational. This phenomenon is part of the disposition effect (“people ride losers too long, and sell winners too soon”). The current research examines the role of hope as a potential explanation of why people ride losers too long. Three correlational studies (1A, 1B, and 2) find that people's trait hope is positively associated with their inclination to keep losing stocks, regardless of their risk-seeking tendency (Study 2). Further, three experimental studies (3, 4, and 5) reveal that people are inclined to hold on to losing (vs. not-losing) stocks because of their hope to break even, not because of their hope to gain. Studies 4 and 5 provide process evidence confirming the role of hope and indicate potential interventions that decrease people's tendency to keep losing stocks by reducing this hope. The findings contribute to the limited empirical literature on how emotions influence the disposition effect by providing empirical evidence for the role of hope. Moreover, the findings add to the literature on hope by revealing its role in financial decision-making and show a “dark side” of this positive emotion.

General Discussion

Investors are reluctant to sell their losing stocks, which is part of the well-known disposition effect (Shefrin & Statman, 1985). Why would investors hold on, especially when it is a suboptimal financial decision? In a series of studies, we found consistent support for the idea that the emotion of hope at least partly explains why people hold on to their losing stocks. Studies 1A and 1B revealed that people's trait hope (measured by two trait hope scales) is positively associated with their inclination to keep losing stocks. Study 2 further confirmed this association while controlling for the risk-taking tendency of real-world investors. In Study 3, we developed a simple and effective experimental design to examine whether losing influences hope and people's tendency to keep stocks in the same way, and to differentiate between what people hope for: to break even versus to gain. The results indicate that when one's stocks are losing, compared with when they are not, people experience a stronger hope to break even (but not a stronger hope to gain) and a stronger inclination to keep the stocks.

Moreover, the hope to break even (but not the hope to gain) mediated the effect of losing on the inclination to keep. Study 4 found that reducing people's hope to break even decreases their inclination to keep their losing stocks to the same level as when their stocks had not decreased in price. Study 5 found that people tend to have a lower hope to break even when holding stocks on behalf of others (vs. for themselves) and are thus less likely to keep the losing stocks. Together, Studies 4 and 5 provide process evidence that reducing hope attenuates the inclination to keep, suggesting two possible interventions that target the possibility and the desire features of hope, respectively. In short, people cling to losing stocks because they hope to break even, and reducing this hope decreases their inclination to keep them.
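Concretely, the mediation claim means the effect of losing on keeping runs through the hope to break even. The sketch below illustrates that logic on simulated data: regress the mediator on the manipulation (path a), regress the outcome on both (paths b and c'), and take the product a*b as the indirect effect. All variable names and coefficients are illustrative assumptions, not the authors' materials; statsmodels is assumed available.

```python
# Minimal sketch of the mediation logic behind Studies 3-5, on simulated
# data. Names and effect sizes are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
losing = rng.integers(0, 2, size=n)                  # 0 = not losing, 1 = losing
hope_break_even = 1.2 * losing + rng.normal(size=n)  # mediator
keep = 0.8 * hope_break_even + 0.1 * losing + rng.normal(size=n)  # outcome

# Path a: manipulation -> mediator
a = sm.OLS(hope_break_even, sm.add_constant(losing)).fit().params[1]

# Paths b and c': mediator -> outcome, controlling for the manipulation
exog = sm.add_constant(np.column_stack([hope_break_even, losing]))
fit = sm.OLS(keep, exog).fit()
b, c_prime = fit.params[1], fit.params[2]

# The indirect (mediated) effect is the product of paths a and b
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```

A real analysis would also attach a confidence interval to the indirect effect (e.g., via bootstrapping); the sketch only shows the a*b decomposition.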

Wednesday, April 26, 2023

A Prosociality Paradox: How Miscalibrated Social Cognition Creates a Misplaced Barrier to Prosocial Action

Epley, N., Kumar, A., Dungan, J., &
Echelbarger, M. (2023).
Current Directions in Psychological Science,
32(1), 33–41. 
https://doi.org/10.1177/09637214221128016

Abstract

Behaving prosocially can increase well-being among both those performing a prosocial act and those receiving it, and yet people may experience some reluctance to engage in direct prosocial actions. We review emerging evidence suggesting that miscalibrated social cognition may create a psychological barrier that keeps people from behaving as prosocially as would be optimal for both their own and others’ well-being. Across a variety of interpersonal behaviors, those performing prosocial actions tend to underestimate how positively their recipients will respond. These miscalibrated expectations stem partly from a divergence in perspectives, such that prosocial actors attend relatively more to the competence of their actions, whereas recipients attend relatively more to the warmth conveyed. Failing to fully appreciate the positive impact of prosociality on others may keep people from behaving more prosocially in their daily lives, to the detriment of both their own and others’ well-being.

Undervaluing Prosociality

It may not be accidental that William James (1896/1920) named “the craving to be appreciated” as “the deepest principle in human nature” only after receiving a gift of appreciation that he described as “the first time anyone ever treated me so kindly.” “I now perceive one immense omission in my [Principles of Psychology],” he wrote regarding the importance of appreciation. “I left it out altogether . . . because I had never had it gratified till now” (p. 33).

James does not seem to be unique in failing to recognize the positive impact that appreciation can have on recipients. In one experiment (Kumar & Epley, 2018, Experiment 1), MBA students thought of a person they felt grateful to, but to whom they had not yet expressed their appreciation. The students, whom we refer to as expressers, wrote a gratitude letter to this person and then reported how they expected the recipient would feel upon receiving it: how surprised the recipient would be to receive the letter, how surprised the recipient would be about the content, how negative or positive the recipient would feel, and how awkward the recipient would feel. Expressers willing to do so then provided recipients’ email addresses so the recipients could be contacted to report how they actually felt receiving their letter. Although expressers recognized that the recipients would feel positive, they did not recognize just how positive the recipients would feel: Expressers underestimated how surprised the recipients would be to receive the letter, how surprised the recipients would be by its content, and how positive the recipients would feel, whereas they overestimated how awkward the recipients would feel. Table 1 shows the robustness of these results across an additional published experiment and 17 subsequent replications (see Fig. 1 for overall results; full details are available at OSF: osf.io/7wndj/). Expressing gratitude has a reliably more positive impact on recipients than expressers expect.

Conclusion

How much people genuinely care about others has been debated for centuries. In summarizing the purely selfish viewpoint endorsed by another author, Thomas Jefferson (1854/2011) wrote, “I gather from his other works that he adopts the principle of Hobbes, that justice is founded in contract solely, and does not result from the construction of man.” Jefferson felt differently: “I believe, on the contrary, that it is instinct, and innate, that the moral sense is as much a part of our constitution as that of feeling, seeing, or hearing . . . that every human mind feels pleasure in doing good to another” (p. 39).

Such debates will never be settled by simply observing human behavior because prosociality is not simply produced by automatic “instinct” or “innate” disposition, but rather can be produced by complicated social cognition (Miller, 1999). Jefferson’s belief that people feel “pleasure in doing good to another” is now well supported by empirical evidence. However, the evidence we reviewed here suggests that people may avoid experiencing this pleasure not because they do not want to be good to others, but because they underestimate just how positively others will react to the good being done to them.

Tuesday, April 25, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. (2023).
Ethical Theory and Moral Practice.
https://doi.org/10.1007/s10677-023-10385-1

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Conclusions

In this paper we raised two challenges to McGeer’s scaffolded reasons-responsiveness account: agents who are less attuned to social feedback, such as autistics, and corrupting moral audiences. We found that, once we parsed the two roles that feedback from a moral audience plays, autistics provide reasons to revise the scaffolded reasons-responsiveness account. We argued that autistic persons, like neurotypicals, wish to justify their behaviour to a moral audience and rely on their moral audience for feedback. However, autistic persons may need more explicit feedback when it comes to the effects their behaviour has on others. They also compensate for difficulties in receiving information from the moral audience by justifying action through appeal to moral rules. This shows that McGeer’s view of moral agency needs to include observance of moral rules as a way of reducing reliance on audience feedback. We suspect that McGeer would approve of this proposal, as she mentions that an instance of blame can lead to vocal protest by the target, and a possible renegotiation of norms and rules for what constitutes acceptable behaviour (2019). Consideration of corrupting audiences highlights a different problem from that of resisting blame and renegotiating norms. It draws attention to cases where individual agents must try to go beyond what is accepted in their moral environment, a significant challenge for social beings who rely strongly on moral audiences in developing and calibrating their moral reasons-responsiveness. Resistance to a moral audience requires the capacity to evaluate the action differently; often this will be with reference to a moral rule or principle.

For both neurotypical and autistic individuals, consistent application of moral rules or principles can reinforce and bring back to mind important moral commitments when we are led astray by our own desires or by specific (im)moral audiences. But moral audiences still play a crucial role in developing and maintaining reasons-responsiveness. First, they are essential to the development and maintenance of all agents’ moral sensitivity. Second, they can provide an important moral corrective where people may have moral blind spots, especially when they provide insights into ways in which a person has fallen short morally by not taking on board reasons that are not obvious to them. Often, these can be reasons which pertain to the respectful treatment of others who are in some important way different from that person.


In sum: Be responsible and accountable in your actions, as your moral audience is always watching. Doing the right thing matters not just for your reputation, but for the greater good.

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 MAR 23

Concerns about the role of artificial intelligence in our lives, particularly whether it will help or harm us, improve our health and well-being or work to our detriment, are far from new. Whether it was 2001: A Space Odyssey’s HAL that colored our earliest perceptions of AI or the much more recent M3GAN, these questions are not unique to the contemporary era; even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike in ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon Fitbits or phone apps to track our daily steps and prompt us to move or walk more throughout the day. Others use chatbots, available via apps or online platforms, that claim to improve user mental health by offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source of medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and its applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Some in medicine were quick to embrace its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and widespread concern that health care providers will adopt AI technologies too quickly, before fully understanding the risks of doing so.


In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Sunday, April 23, 2023

Produced and counterfactual effort contribute to responsibility attributions in collaborative tasks

Xiang, Y., Landy, J., et al. (2023, March 8). 
PsyArXiv
https://doi.org/10.31234/osf.io/jc3hk

Abstract

How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator's part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.

General discussion

Responsibility for the outcomes of collaborations is often distributed unevenly. For example, the lead author on a project may get the bulk of the credit for a scientific discovery, the head of a company may shoulder the blame for a failed product, and the lazier of two friends may get the greater share of blame for failing to lift a couch. However, past work has provided conflicting accounts of the computations that drive responsibility attributions in collaborative tasks. Here, we compared these accounts against human responsibility attributions in a simple collaborative task in which two agents attempted to lift a box together. We contrasted seven models that predict responsibility judgments based on metrics proposed in past work: three production-style models (Force, Strength, Effort), three counterfactual-style models (Focal-agent-only, Non-focal-agent-only, Both-agent), and one Ensemble model that combines the best-fitting production- and counterfactual-style models. Experiments 1a and 1b showed that the Effort model and the Both-agent counterfactual model capture the data best among the production-style and counterfactual-style models, respectively. However, neither provided a fully adequate fit on its own. We then showed that predictions derived from the average of these two models (i.e., the Ensemble model) outperform all other models, suggesting that responsibility judgments are likely a combination of production-style reasoning and counterfactual reasoning. Further evidence came from analyses of individual participants, which revealed that the Ensemble model explained more participants’ data than any other model. These findings were subsequently supported by Experiments 2a and 2b, which replicated the results when additional force information was shown to participants, and by Experiment 3, which validated the model predictions with a broader range of stimuli.
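The Ensemble model's core idea is simple enough to sketch in a few lines: average a production-style score with a counterfactual-style score. Everything below (the function names, the effort-share formula, the sampled counterfactual flips) is an illustrative assumption rather than the paper's fitted model.

```python
# Minimal sketch of an Ensemble-style responsibility judgment:
# the average of a production-style score and a counterfactual score.
# All names and numbers are illustrative, not the authors' model.
import numpy as np

def effort_score(focal_effort: float, total_effort: float) -> float:
    """Production-style score: the focal agent's share of total produced effort."""
    return focal_effort / total_effort

def counterfactual_score(outcome_flips: np.ndarray) -> float:
    """Both-agent counterfactual score: fraction of sampled alternative
    effort levels (by either agent) that would have changed the outcome."""
    return float(outcome_flips.mean())

def ensemble_score(focal_effort, total_effort, outcome_flips):
    # The Ensemble prediction averages the two component predictions.
    return 0.5 * (effort_score(focal_effort, total_effort)
                  + counterfactual_score(outcome_flips))

# Example: the focal agent produced 6 of 10 units of effort, and 3 of 10
# sampled counterfactual effort levels would have flipped the outcome.
flips = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0])
print(ensemble_score(6.0, 10.0, flips))  # 0.45
```

Averaging the two components mirrors the paper's finding that neither production-style nor counterfactual reasoning fits the judgments adequately on its own.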


Summary: The effort exerted by each member and counterfactual thinking both play a crucial role in attributing responsibility for success or failure in collaborative tasks. This study suggests that higher effort leads to more responsibility for success, while lower effort leads to more responsibility for failure.

Saturday, April 22, 2023

A Psychologist Explains How AI and Algorithms Are Changing Our Lives

Danny Lewis
The Wall Street Journal
Originally posted 21 MAR 23

In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty. 

In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.

The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?

It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.

So after all these decades of computer science, are algorithms really just still calculators at the end of the day, running more and more complex equations?

What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.

Friday, April 21, 2023

Moral Shock

Stockdale, K. (2022).
Journal of the American Philosophical
Association, 8(3), 496-511.
doi:10.1017/apa.2021.15

Abstract

This paper defends an account of moral shock as an emotional response to intensely bewildering events that are also of moral significance. This theory stands in contrast to the common view that shock is a form of intense surprise. On the standard model, surprise is an emotional response to events that violate one's expectations. But I show that we can be morally shocked by events that confirm our expectations. What makes an event shocking is not that it violates one's expectations, but that the content of the event is intensely bewildering (and bewildering events are often, but not always, contrary to our expectations). What causes moral shock is, I argue, our lack of emotional preparedness for the event. And I show that, despite the relative lack of attention to shock in the philosophical literature, the emotion is significant to moral, social, and political life.

Conclusion

I have argued that moral shock is an emotional response to intensely bewildering events that are also of moral significance. Although shock is typically considered to be an intense form of surprise, where surprise is an emotional response to events that violate our expectations or are at least unexpected, I have argued that the contrary-expectation model is found wanting. For it seems that we are sometimes shocked by the immoral actions of others even when we expected them to behave in just the ways that they did. What is shocking is what is intensely bewildering—and the bewildering often, but not always, tracks the unexpected. The extent to which such events shock us is, I have argued, a function of our felt readiness to experience them. When we are not emotionally prepared for what we expect to occur, we might find ourselves in the grip of moral shock.

There is much more to be said about the emotion of moral shock and its significance to moral, social, and political life. This paper is meant to be a starting point rather than a decisive take on an undertheorized emotion. But by understanding more deeply the nature and effects of moral shock, we can gain richer insight into a common response to immoral actions; what prevents us from responding well in the moment; and how the brief and fleeting, yet intense events in our lives affect agency, responsibility, and memory. We might also be able to make better sense of the bewildering social and political events that shock us and those to which we have become emotionally resilient.


This appears to be a philosophical explication of "Moral Injury," as discussed in multiple places on this website.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Conclusion

There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 
https://doi.org/10.1007/s13347-023-00620-z

Abstract

In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.

(cut)

Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is prompted by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming in its own right, but also affects how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions that can shed light on the presupposed normative claims at work in the field. Here, further exploration at deeper levels could help clarify which things are important and which are not, and in which directions the field should develop.