Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, February 18, 2022

Measuring Impartial Beneficence: A Kantian Perspective on the Oxford Utilitarianism Scale

Mihailov, E. (2022). 
Review of Philosophy and Psychology
https://doi.org/10.1007/s13164-021-00600-2

Abstract

To capture genuine utilitarian tendencies, Kahane et al. (Psychological Review, 125:131, 2018) developed the Oxford Utilitarianism Scale (OUS) based on two subscales, which measure the commitment to impartial beneficence and the willingness to cause harm for the greater good. In this article, I argue that the impartial beneficence subscale, which breaks new ground relative to previous research on utilitarian moral psychology, does not distinctively measure utilitarian moral judgment. I argue that Kantian ethics captures the all-encompassing impartial concern for the well-being of all human beings. The Oxford Utilitarianism Scale draws, in fact, a point of division that places Kantian and utilitarian theories on the same track. I suggest that the impartial beneficence subscale needs to be significantly revised in order to capture distinctively utilitarian judgments. Additionally, I propose that psychological research should focus on exploring multiple sources of the phenomenon of impartial beneficence without categorizing it as exclusively utilitarian.

Conclusion

The narrow focus of psychological research on sacrificial harm contributes to a Machiavellian picture of utilitarianism. By developing the Oxford Utilitarianism Scale, Kahane and his colleagues have shown how important it is for the study of moral judgment to include the inspiring ideal of impartial concern. However, this significant contribution goes beyond the utilitarian/deontological divide. We have learned to divide moral theories according to whether they are, at root, Kantian or utilitarian. Kant famously denounced lying, even if it would save someone’s life, whereas utilitarianism accepts transgression of moral rules if it maximizes the greater good. However, in regard to promoting the ideal of impartial beneficence, Kantian ethics and utilitarianism overlap, because both theories contributed to the Enlightenment project of moral reform. In Kantian ethics, the very concepts of duty and moral community are interpreted in radically impartial and cosmopolitan terms. Thus, a fruitful area for future research opens up: exploring the diverse psychological sources of impartial beneficence.

Thursday, February 17, 2022

Filling the gaps: Cognitive control as a critical lens for understanding mechanisms of value-based decision-making.

Frömer, R., & Shenhav, A. (2021, May 17). 
https://doi.org/10.31234/osf.io/dnvrj

Abstract

While often seeming to investigate rather different problems, research into value-based decision-making and cognitive control has historically offered parallel insights into how people select thoughts and actions. While the former studies how people weigh costs and benefits to make a decision, the latter studies how they adjust information processing to achieve their goals. Recent work has highlighted ways in which decision-making research can inform our understanding of cognitive control. Here, we provide the complementary perspective: how cognitive control research has informed our understanding of decision-making. We highlight three particular areas of research where this critical interchange has occurred: (1) how different types of goals shape the evaluation of choice options, (2) how people use control to adjust how they make their decisions, and (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales. We show how adopting this alternate viewpoint offers new insight into the determinants of both decisions and control; provides alternative interpretations for common neuroeconomic findings; and generates fruitful directions for future research.

Highlights

•  We review how taking a cognitive control perspective provides novel insights into the mechanisms of value-based choice.

•  We highlight three areas of research where this critical interchange has occurred:

      (1) how different types of goals shape the evaluation of choice options,

      (2) how people use control to adjust how they make their decisions, and

      (3) how people monitor decisions to inform adjustments to control at multiple levels and timescales.

From Exerting Control Beyond Our Current Choice

We have so far discussed choices the way they are typically studied: in isolation. However, we don’t make choices in a vacuum, and our current choices depend on previous choices we have made (Erev & Roth, 2014; Keung, Hagen, & Wilson, 2019; Talluri et al., 2020; Urai, Braun, & Donner, 2017; Urai, de Gee, Tsetsos, & Donner, 2019). One natural way in which choices influence each other is through learning about the options, where the evaluation of the outcome of one choice refines the expected value (incorporating range and probability) assigned to that option in future choices (Fontanesi, Gluth, et al., 2019; Fontanesi, Palminteri, et al., 2019; Miletic et al., 2021). Here we focus on a different, complementary way, central to cognitive control research, where evaluations of the process of ongoing and past choices inform the process of future choices (Botvinick et al., 1999; Bugg, Jacoby, & Chanani, 2011; Verguts, Vassena, & Silvetti, 2015). In cognitive control research, these choice evaluations and their influence on subsequent adaptation are studied under the umbrella of performance monitoring (Carter et al., 1998; Ullsperger, Fischer, Nigbur, & Endrass, 2014). Unlike option-based learning, performance monitoring influences not only which options are chosen, but also how subsequent choices are made. It also informs higher-order decisions about strategy and task selection (Fig. 5A).
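To make the contrast above concrete, here is a minimal, illustrative sketch (not taken from Frömer and Shenhav's paper; the function names, learning rate, and step sizes are assumptions for illustration only). The first function captures option-based learning, nudging an option's expected value toward an observed outcome with a simple delta rule; the second captures a performance-monitoring-style adjustment, raising a decision threshold after a detected error or under high response conflict, loosely analogous to post-error slowing.

```python
# Illustrative sketch only: two ways past choices can shape future ones.

def update_option_value(value, outcome, learning_rate=0.1):
    """Option-based learning: move an option's expected value toward the
    outcome just observed (simple delta rule)."""
    prediction_error = outcome - value
    return value + learning_rate * prediction_error

def update_decision_threshold(threshold, error_detected, conflict,
                              error_step=0.3, conflict_step=0.1):
    """Performance monitoring: after an error or under high response conflict,
    raise the response threshold for the next choice, trading speed for
    accuracy (step sizes are arbitrary illustrative values)."""
    if error_detected:
        threshold += error_step
    threshold += conflict_step * conflict  # conflict assumed to lie in [0, 1]
    return threshold

# The first update changes WHICH option looks best on the next choice;
# the second changes HOW the next choice is made.
new_value = update_option_value(value=0.5, outcome=1.0)              # -> 0.55
new_threshold = update_decision_threshold(1.0, error_detected=True,
                                          conflict=0.8)              # -> 1.38
print(new_value, new_threshold)
```

The point of the contrast is simply that outcome-based learning updates the values assigned to options, whereas performance monitoring updates the settings of the decision process itself, echoing the distinction drawn in the paragraph above.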

Wednesday, February 16, 2022

AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness

Wiese, W. and Friston, K.J.
Behavioural Brain Research
Volume 420, 26 February 2022, 113704

Abstract

Methods used in artificial intelligence (AI) overlap with methods used in computational psychiatry (CP). Hence, considerations from AI ethics are also relevant to ethical discussions of CP. Ethical issues include, among others, fairness and data ownership and protection. Apart from this, morally relevant issues also include potential transformative effects of applications of AI—for instance, with respect to how we conceive of autonomy and privacy. Similarly, successful applications of CP may have transformative effects on how we categorise and classify mental disorders and mental health. Since many mental disorders go along with disturbed conscious experiences, it is desirable that successful applications of CP improve our understanding of disorders involving disruptions in conscious experience. Here, we discuss prospects and pitfalls of transformative effects that CP may have on our understanding of mental disorders. In particular, we examine the concern that even successful applications of CP may fail to take all aspects of disordered conscious experiences into account.


Highlights

•  Considerations from AI ethics are also relevant to the ethics of computational psychiatry.

•  Ethical issues include, among others, fairness and data ownership and protection.

•  They also include potential transformative effects.

•  Computational psychiatry may transform conceptions of mental disorders and health.

•  Disordered conscious experiences may pose a particular challenge.

From the Discussion

At present, we are far from having a formal account of conscious experience. As mentioned in the introduction, many empirical theories of consciousness make competing claims, and there is still much uncertainty about the neural mechanisms that underwrite ordinary conscious processes (let alone psychopathology). Hence, the suggestion to foster research on the computational correlates of disordered conscious experiences should not be regarded as an invitation to ignore subjective reports. The patient’s perspective will continue to be central for normatively assessing their experienced condition. Computational models offer constructs to better describe and understand elusive aspects of a disordered conscious experience, but the patient will remain the primary authority on whether they are suffering from their condition. 

Tuesday, February 15, 2022

How do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting descriptive and normative hypotheses

Rodríguez-Arias, D., et al. (2020)
Bioethics, 34(5)
DOI: 10.1111/bioe.12707

Abstract

Bioethicists involved in end-of-life debates routinely distinguish between ‘killing’ and ‘letting die’. Meanwhile, previous work in cognitive science has revealed that when people characterize behaviour as either actively ‘doing’ or passively ‘allowing’, they do so not purely on descriptive grounds, but also as a function of the behaviour’s perceived morality. In the present report, we extend this line of research by examining how medical students and professionals (N = 184) and laypeople (N = 122) describe physicians’ behaviour in end-of-life scenarios. We show that the distinction between ‘ending’ a patient’s life and ‘allowing’ it to end arises from morally motivated causal selection. That is, when a patient wishes to die, her illness is treated as the cause of death and the doctor is seen as merely allowing her life to end. In contrast, when a patient does not wish to die, the doctor’s behaviour is treated as the cause of death and, consequently, the doctor is described as ending the patient’s life. This effect emerged regardless of whether the doctor’s behaviour was omissive (as in withholding treatment) or commissive (as in applying a lethal injection). In other words, patient consent shapes causal selection in end-of-life situations, and in turn determines whether physicians are seen as ‘killing’ patients, or merely as ‘enabling’ their death.

From the Discussion

Across three cases of end-of-life intervention, we find convergent evidence that moral appraisals shape behavior description (Cushman et al., 2008) and causal selection (Alicke, 1992; Kominsky et al., 2015). Consistent with the deontic hypothesis, physicians who behaved according to patients’ wishes were described as allowing the patient’s life to end. In contrast, physicians who disregarded the patient’s wishes were described as ending the patient’s life. Additionally, patient consent appeared to inform causal selection: The doctor was seen as the cause of death when disregarding the patient’s will; but the illness was seen as the cause of death when the doctor had obeyed the patient’s will.

Whether the physician’s behavior was omissive or commissive did not play a comparable role in behavior description or causal selection. First, these effects were weaker than those of patient consent. Second, while the effects of consent generalized to medical students and professionals, the effects of commission arose only among lay respondents. In other words, medical students and professionals treated patient consent as the sole basis for the doing/allowing distinction.

Taken together, these results confirm that doing and allowing serve a fundamentally evaluative purpose (in line with the deontic hypothesis, and Cushman et al., 2008), and only secondarily serve a descriptive purpose, if at all.

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits—traits that help us quickly decide who our allies are (Goodwin et al., 2014)—physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000) and thus obtain valuable information about whether the target is a threat or an ally, where morally relevant information is absent (such as during initial impression formation), this motivation to assess moral character may lead to an overreliance on heuristic cues.

Sunday, February 13, 2022

Hit by the Virtual Trolley: When is Experimental Ethics Unethical?

Rueda, J. (2022).
ResearchGate.net

Abstract

The trolley problem is one of the liveliest research frameworks in experimental ethics. In the last decade, social neuroscience and experimental moral psychology have gone beyond studies with mere text-based hypothetical moral dilemmas. In this article, I present the rationale behind testing actual behaviour in more realistic scenarios through Virtual Reality and summarize the body of evidence raised by the experiments with virtual trolley scenarios. Then, I approach the argument of Ramirez and LaBarge (2020), who claim that the virtual simulation of the Footbridge version of the trolley dilemma is an unethical research practice, and I raise some objections to it. Finally, I provide some reflections about the means and ends of trolley-like scenarios and other sacrificial dilemmas in experimental ethics.

(cut)

From Rethinking the Means and Ends of Trolleyology

The first response states that these studies have no normative relevance at all. A traditional objection to the trolley dilemma pointed to the artificiality of the scenario and its normative uselessness in translating to real contemporary problems (see, for instance, Midgley, cited in Edmonds, 2014, pp. 100-101). We have already seen that this is not true. Indeed, the existence of real dilemmas that share structural similarities with hypothetical trolley scenarios makes it practically useful to test our intuitions on them (Edmonds, 2014). Besides that, a more sophisticated objection claims that intuitive responses to the trolley problem have no ethical value because intuitions are quite unreliable. Cognitive science has frequently shown how fallible, illogical, biased, and irrational many of our intuitive preferences can be. In fact, moral intuitions in text-based trolley dilemmas are subject to morally irrelevant factors such as order (Liao et al., 2012), framing (Cao et al., 2017), or mood (Pastötter et al., 2013). However, the fact that there are wrong or biased intuitions does not mean that intuitions do not have any epistemic or moral value. Dismissing intuitions because they are subject to implicit psychological factors, in favour of armchair ethical theorizing, is inconsistent. Empirical evidence should play a role in normative theorizing on trolley dilemmas, since ethical theorizing is also subject to implicit psychological factors, which experimental research can help to make explicit (Kahane, 2013).

The second option states that what should be done as public policy on sacrificial dilemmas is what the majority of people say or do in those situations. In other words, the descriptive results of the experiments show us how we should act at the normative level. Consider the following example from the debate on self-driving vehicles: “We thus argue that any implementation of an ethical decision-making system for a specific context should be based on human decisions made in the same context” (Sütfeld et al., 2017). So, as most people act in a utilitarian way in VR simulations of traffic dilemmas, autonomous cars should act similarly in analogous situations (Sütfeld et al., 2017).

Saturday, February 12, 2022

Privacy and digital ethics after the pandemic

Carissa Véliz
Nature Electronics
Vol. 4, January 2021, pp. 10-11

The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet, and Facebook has shadow profiles on netizens even when they have never had an account on the platform.

(cut)

Reasons for optimism

Despite the concerning trends regarding privacy and digital ethics during the pandemic, there are reasons to be cautiously optimistic about the future.  First, citizens around the world are increasingly suspicious of tech companies, and are gradually demanding more from them. Second, there is a growing awareness that the lack of privacy ingrained in current apps entails a national security risk, which can motivate governments into action. Third, US President Joe Biden seems eager to collaborate with the international community, in contrast to his predecessor. Fourth, regulators in the US are seriously investigating how to curtail tech’s power, as evidenced by the Department of Justice’s antitrust lawsuit against Google and the Federal Trade Commission’s (FTC) antitrust lawsuit against Facebook.  Amazon and YouTube have also been targeted by the FTC for a privacy investigation. With discussions of a federal privacy law becoming more common in the US, it would not be surprising to see such a development in the next few years. Tech regulation in the US could have significant ripple effects elsewhere.

Friday, February 11, 2022

Social Neuro AI: Social Interaction As the "Dark Matter" of AI

S. Bolotta & G. Dumas
arXiv.org
Originally published 4 January 2022

Abstract

We are making the case that empirical results from social psychology and social neuroscience along with the framework of dynamics can be of inspiration to the development of more intelligent artificial agents. We specifically argue that the complex human cognitive architecture owes a large portion of its expressive power to its ability to engage in social and cultural learning. In the first section, we aim at demonstrating that social learning plays a key role in the development of intelligence. We do so by discussing social and cultural learning theories and investigating the abilities that various animals have at learning from others; we also explore findings from social neuroscience that examine human brains during social interaction and learning. Then, we discuss three proposed lines of research that fall under the umbrella of Social NeuroAI and can contribute to developing socially intelligent embodied agents in complex environments. First, neuroscientific theories of cognitive architecture, such as the global workspace theory and the attention schema theory, can enhance biological plausibility and help us understand how we could bridge individual and social theories of intelligence. Second, intelligence occurs in time as opposed to over time, and this is naturally incorporated by the powerful framework offered by dynamics. Third, social embodiment has been demonstrated to provide social interactions between virtual agents and humans with a more sophisticated array of communicative signals. To conclude, we provide a new perspective on the field of multiagent robot systems, exploring how it can advance by following the aforementioned three axes.

Conclusion

At the crossroads of robotics, computer science, and psychology, one of the main challenges for humans is to build autonomous agents capable of participating in cooperative social interactions. This is important not only because AI will play a crucial role in our daily life, but also because, as demonstrated by results in social neuroscience and evolutionary psychology, intrapersonal intelligence is tightly connected with interpersonal intelligence, especially in humans (Dumas et al., 2014a). In this opinion article, we have attempted to unify the lines of research that, at the moment, are separated from each other; in particular, we have proposed three research directions that are expected to enhance efficient exchange of information between agents and, as a consequence, individual intelligence (especially in out-of-distribution (OOD) generalization). This would contribute to creating agents that not only have humanlike OOD skills, but are also able to exhibit such skills in extremely complex and realistic environments (Dennis et al., 2021), while interacting with other embodied agents and with humans.


Thursday, February 10, 2022

Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Santoni de Sio, F., Mecacci, G. 
Philos. Technol. 34, 1057–1084 (2021). 
https://doi.org/10.1007/s13347-021-00450-x

Abstract

The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral and public accountability, and active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also happen with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and non-satisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to address the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.

(cut)

The Tracing Condition and Its Payoffs for Responsibility

Unlike proposals based on new forms of legal liability, MHC (Meaningful Human Control) proposes that socio-technical systems also be systematically designed to avoid gaps in moral culpability, accountability, and active responsibility. The “tracing condition” proposes that a system can remain under MHC only in the presence of a solid alignment between the system and the technical, motivational, and moral capacities of the relevant agents involved, with different roles, in the design, control, and use of the system. The direct goal of this condition is to promote a fair distribution of moral culpability, thereby avoiding two undesired results. First, scapegoating, i.e. agents being held culpable without having a fair capacity to avoid wrongdoing (Elish, 2019): in the example of the automated driving systems above, for instance, the drivers’ relevant technical and motivational capacities not being sufficiently studied and trained. Second, impunity for avoidable accidents, i.e. culpability gaps: the impossibility of legitimately blaming anybody, as no individual agent possesses all the relevant capacities, e.g. the managers/designers having the technical capacity but not the moral motivation to avoid accidents, and the drivers having the motivation but not the skills. The tracing condition also helps to address accountability and active responsibility gaps. If a person or organisation is to be morally or publicly accountable, then they must also possess the specific capacity to discharge this duty: according to another example discussed above, if a doctor is to remain accountable to her patients for her decisions, then she should maintain the capacity and motivation to understand the functioning of the AI system she uses and to explain her decisions to the patients.