Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Responsibility.

Friday, November 3, 2023

Posthumanism’s Revolt Against Responsibility

Nolen Gertz
Commonweal Magazine
Originally published 31 Oct 23

Here is an excerpt:

A major problem with this view—one Kirsch neglects—is that it conflates the destructiveness of particular humans with the destructiveness of humanity in general. Acknowledging that climate change is driven by human activity should not prevent us from identifying precisely which humans and activities are to blame. Plenty of people are concerned about climate change and have altered their behavior by, for example, using public transportation, recycling, or being more conscious about what they buy. Yet this individual behavior change is not sufficient because climate change is driven by the large-scale behavior of corporations and governments.

In other words, it is somewhat misleading to say we have entered the “Anthropocene” because anthropos is not as a whole to blame for climate change. Rather, in order to place the blame where it truly belongs, it would be more appropriate—as Jason W. Moore, Donna J. Haraway, and others have argued—to say we have entered the “Capitalocene.” Blaming humanity in general for climate change excuses those particular individuals and groups actually responsible. To put it another way, to see everyone as responsible is to see no one as responsible. Anthropocene antihumanism is thus a public-relations victory for the corporations and governments destroying the planet. They can maintain business as usual on the pretense that human nature itself is to blame for climate change and that there is little or nothing corporations or governments can or should do to stop it, since, after all, they’re only human.

Kirsch does not address these straightforward criticisms of Anthropocene antihumanism. This throws into doubt his claim that he is cataloguing their views to judge whether they are convincing and to explore their likely impact. Kirsch does briefly bring up the activist Greta Thunberg as a potential opponent of the nihilistic antihumanists, but he doesn’t consider her challenge in depth. 

Here is my summary:

Anthropocene antihumanism is a pessimistic view that sees humanity as a destructive force on the planet. It argues that humans have caused climate change, mass extinctions, and other environmental problems, and that we are ultimately incapable of living in harmony with nature. Some Anthropocene antihumanists believe that humanity should go extinct, while others believe that we should radically change our way of life in order to avoid destroying ourselves and the planet.

Key points:
  • Posthumanism is a broad philosophical movement that challenges the traditional view of what it means to be human.
  • Anthropocene antihumanism and transhumanism are two strands of posthumanism that share a common theme of revolt against responsibility.
  • Anthropocene antihumanists believe that humanity is so destructive that it is beyond redemption, and that we should therefore either go extinct or give up our responsibility to manage the planet.
  • Transhumanists believe that we can transcend our human limitations and create a new, posthuman species that is not bound by the same moral and ethical constraints as humans.
  • Kirsch argues that this revolt against responsibility is a dangerous trend, and that we should instead work to create a more sustainable and just future for all.

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published 26 June 23

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful at medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.

Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.

Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B. et al. (2023).
Nature Machine Intelligence.

The recent releases of large-scale language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard, have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflect this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.


Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption has been that creatives and knowledge workers would remain much less impacted by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.

Saturday, May 27, 2023

Costly Distractions: Focusing on Individual Behavior Undermines Support for Systemic Reforms

Hagmann, D., Liao, Y., Chater, N., & 
Loewenstein, G. (2023, April 22). 


Policy challenges can typically be addressed both through systemic changes (e.g., taxes and mandates) and by encouraging individual behavior change. In this paper, we propose that, while in principle complementary, systemic and individual perspectives can compete for the limited attention of people and policymakers. Thus, directing policies in one of these two ways can distract the public’s attention from the other—an “attentional opportunity cost.” In two pre-registered experiments (n = 1,800) covering three high-stakes domains (climate change, retirement savings, and public health), we show that when people learn about policies targeting individual behavior (such as awareness campaigns), they are more likely to themselves propose policies that target individual behavior, and to hold individuals rather than organizational actors responsible for solving the problem, than are people who learned about systemic policies (such as taxes and mandates, Study 1). This shift in attribution of responsibility has behavioral consequences: people exposed to individual interventions are more likely to donate to an organization that educates individuals rather than one seeking to effect systemic reforms (Study 2). Policies targeting individual behavior may, therefore, have the unintended consequence of redirecting attention and attributions of responsibility away from systemic change to individual behavior.


Major policy problems likely require a realignment of systemic incentives and regulations, as well as measures aimed at individual behavior change. In practice, systemic reforms have been difficult to implement, in part due to political polarization and in part because concentrated interest groups have lobbied against changes that threaten their profits. This has shifted the focus to individual behavior. The past two decades, in particular, have seen increasing popularity of ‘nudges’: interventions that can influence individual behavior without substantially changing economic incentives (Thaler & Sunstein, 2008). For example, people may be defaulted into green energy plans (Sunstein & Reisch, 2013) or 401(k) contributions (Madrian & Shea, 2001), and restaurants may vary whether they place calorie labels on the left or the right side of the menu (Dallas, Liu, & Ubel, 2019). These interventions have enjoyed tremendous popularity because they can often be implemented even when opposition to systemic reforms is too strong to change economic incentives. Moreover, it has been argued that nudges incur low economic costs, making them extremely cost effective even when the gains are small on an absolute scale (Tor & Klick, 2022).

In this paper, we document an important and so far unacknowledged cost of such interventions targeting individual behavior, first postulated by Chater and Loewenstein (2022). We show that when people learn about interventions that target individual behavior, they shift their attention away from systemic reforms compared to those who learn about systemic reforms. Across two experiments, we find that this subsequently affects their attitudes and behaviors. Specifically, they become less likely to propose systemic policy reforms, hold governments less responsible for solving the policy problem, and are less likely to support organizations that seek to promote systemic reform.

The findings of this study may not be news to corporate PR specialists. Indeed, as would be expected according to standard political economy considerations (e.g., Stigler, 1971), organizations act in a way that is consistent with a belief in this attentional opportunity cost account. Initiatives that have captured the public’s attention, including recycling campaigns and carbon footprint calculators, have been devised by the very organizations that stood to lose from further regulation that might have hurt their bottom line (e.g., bottle bills and carbon taxes, respectively), potentially distracting individual citizens, policymakers, and the wider public debate from the systemic changes that are likely to be required to shift substantially away from the status quo.

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 Mar 23

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon FitBits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT have brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before they fully understand the risks of doing so.

In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Sunday, April 23, 2023

Produced and counterfactual effort contribute to responsibility attributions in collaborative tasks

Xiang, Y., Landy, J., et al. (2023, March 8). 


How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator's part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.

General discussion

Responsibility for the outcomes of collaborations is often distributed unevenly. For example, the lead author on a project may get the bulk of the credit for a scientific discovery, the head of a company may shoulder the blame for a failed product, and the lazier of two friends may get the greater share of blame for failing to lift a couch. However, past work has provided conflicting accounts of the computations that drive responsibility attributions in collaborative tasks. Here, we compared each of these accounts against human responsibility attributions in a simple collaborative task where two agents attempted to lift a box together. We contrasted seven models that predict responsibility judgments based on metrics proposed in past work, comprising three production-style models (Force, Strength, Effort), three counterfactual-style models (Focal-agent-only, Non-focal-agent-only, Both-agent), and one Ensemble model that combines the best-fitting production- and counterfactual-style models. Experiments 1a and 1b showed that the Effort model and the Both-agent counterfactual model capture the data best among the production-style models and the counterfactual-style models, respectively. However, neither provided a fully adequate fit on its own. We then showed that predictions derived from the average of these two models (i.e., the Ensemble model) outperform all other models, suggesting that responsibility judgments are likely a combination of production-style reasoning and counterfactual reasoning. Further evidence came from analyses performed on individual participants, which revealed that the Ensemble model explained more participants’ data than any other model. These findings were subsequently supported by Experiments 2a and 2b, which replicated the results when additional force information was shown to the participants, and by Experiment 3, which validated the model predictions with a broader range of stimuli.
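The dual-factor account can be sketched in a few lines of code: a production-style score (share of effort produced) is averaged with a counterfactual score (whether the outcome depended on the focal agent). The functional forms and numbers below are illustrative assumptions of ours, not the fitted models from the paper.

```python
# Illustrative sketch of the Ensemble (dual-factor) account of responsibility
# attribution in a two-agent box-lifting task. The specific formulas are toy
# assumptions, not the authors' fitted models.

def effort_model(effort_focal, effort_other):
    """Production-style account: responsibility tracks the focal agent's
    share of the total effort produced."""
    return effort_focal / (effort_focal + effort_other)

def counterfactual_model(effort_focal, effort_other, threshold):
    """Toy both-agent counterfactual account: responsibility is high when
    the outcome would have flipped without the focal agent's effort, i.e.
    the partner's effort alone falls below the lifting threshold."""
    succeeded = effort_focal + effort_other >= threshold
    would_fail_without_focal = effort_other < threshold
    return 1.0 if (succeeded and would_fail_without_focal) else 0.0

def ensemble_model(effort_focal, effort_other, threshold):
    """Average of the production-style and counterfactual-style predictions,
    mirroring the Ensemble model described in the excerpt."""
    return 0.5 * (effort_model(effort_focal, effort_other)
                  + counterfactual_model(effort_focal, effort_other, threshold))

# A harder-working focal agent gets more responsibility on both factors:
print(ensemble_model(8, 2, threshold=9))
print(ensemble_model(5, 5, threshold=9))
```

In this toy version, both agents are counterfactually necessary in both cases, so the production factor (effort share) is what separates the two predictions, matching the intuition that the lazier partner earns less credit.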

Summary: Effort exerted by each member and counterfactual thinking both play a crucial role in attributing responsibility for success or failure in collaborative tasks. This study suggests that higher effort leads to more responsibility for success, while lower effort leads to more responsibility for failure.

Tuesday, January 10, 2023

San Francisco will allow police to deploy robots that kill

Janie Har
Associated Press
Originally posted 29 Nov 22

Supervisors in San Francisco voted Tuesday to give city police the ability to use potentially lethal, remote-controlled robots in emergency situations -- following an emotionally charged debate that reflected divisions on the politically liberal board over support for law enforcement.

The vote was 8-3, with the majority agreeing to grant police the option despite strong objections from civil liberties and other police oversight groups. Opponents said the authority would lead to the further militarization of a police force already too aggressive with poor and minority communities.

Supervisor Connie Chan, a member of the committee that forwarded the proposal to the full board, said she understood concerns over use of force but that “according to state law, we are required to approve the use of these equipments. So here we are, and it’s definitely not a easy discussion.”

The San Francisco Police Department said it does not have pre-armed robots and has no plans to arm robots with guns. But the department could deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect” when lives are at stake, SFPD spokesperson Allison Maxie said in a statement.

“Robots equipped in this manner would only be used in extreme circumstances to save or prevent further loss of innocent lives,” she said.

Supervisors amended the proposal Tuesday to specify that officers could use robots only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means. Only a limited number of high-ranking officers could authorize use of robots as a deadly force option.

Saturday, August 27, 2022

Counterfactuals and the logic of causal selection

Quillien, T., & Lucas, C. G. (2022, June 13)


Everything that happens has a multitude of causes, but people make causal judgments effortlessly. How do people select one particular cause (e.g. the lightning bolt that set the forest ablaze) out of the set of factors that contributed to the event (the oxygen in the air, the dry weather. . . )? Cognitive scientists have suggested that people make causal judgments about an event by simulating alternative ways things could have happened. We argue that this counterfactual theory explains many features of human causal intuitions, given two simple assumptions. First, people tend to imagine counterfactual possibilities that are both a priori likely and similar to what actually happened. Second, people judge that a factor C caused effect E if C and E are highly correlated across these counterfactual possibilities. In a reanalysis of existing empirical data, and a set of new experiments, we find that this theory uniquely accounts for people’s causal intuitions.

From the General Discussion

Judgments of causation are closely related to assignments of blame, praise, and moral responsibility.  For instance, when two cars crash at an intersection, we say that the accident was caused by the driver who went through a red light (not by the driver who went through a green light; Knobe and Fraser, 2008; Icard et al., 2017; Hitchcock and Knobe, 2009; Roxborough and Cumby, 2009; Alicke, 1992; Willemsen and Kirfel, 2019); and we also blame that driver for the accident. According to some theorists, the fact that we judge the norm-violator to be blameworthy or morally responsible explains why we judge that he was the cause of the accident. This might be because our motivation to blame distorts our causal judgment (Alicke et al., 2011), because our intuitive concept of causation is inherently normative (Sytsma, 2021), or because of pragmatics confounds in the experimental tasks that probe the effect of moral violations on causal judgment (Samland & Waldmann, 2016).

Under these accounts, the explanation for why moral considerations affect causal judgment should be completely different from the explanation for why other factors (e.g., prior probabilities, what happened in the actual world, the causal structure of the situation) affect causal judgment. We favor a more parsimonious account: the counterfactual approach to causal judgment (of which our theory is one instantiation) provides a unifying explanation for the influence of both moral and non-moral considerations on causal judgment (Hitchcock & Knobe, 2009).

Finally, many formal theories of causal reasoning aim to model how people make causal inferences (e.g. Cheng, 1997; Griffiths & Tenenbaum, 2005; Lucas & Griffiths, 2010; Bramley et al., 2017; Jenkins & Ward, 1965). These theories are not concerned with the problem of causal selection, the focus of the present paper. It is in principle possible that people use the same algorithms they use for causal inference when they engage in causal selection, but in practice models of causal inference have not been able to predict how people select causes (see Quillien and Barlev, 2022; Morris et al., 2019).

Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in 
Computing Systems (CHI '22), April 29-May 5,
2022, New Orleans, LA, USA. ACM

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust and more responsibility ascription towards human experts, but higher capacity trust, overall trust, and reliance on AI. These different perceived capabilities could be combined in some form of human-AI collaboration. However, lack of responsibility of the AI can be a problem when AI for ethical decision making is implemented. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for the decisions that the AI proposed in case of negative outcomes.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and the respective experts), or the reliance they displayed. A large part of the discussion on usage of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. AI expert.

One conclusion of this finding, that the control conditions of AI may be of less relevance than expected, is that the focus on human-AI collaboration should be less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for actual decision making: if time is short, AI advice or decisions should make clear which value was guiding in the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), such that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision support system could be designed in a way to counteract human biases in ethical decision making (e.g., point to the possibility that human deciders solely focus on utility maximization, thereby neglecting fundamental rights of individuals) such that those biases can become part of the deliberation process.

Sunday, March 13, 2022

Do Obligations Follow the Mind or Body?

Protzko, J., Tobia, K., Strohminger, N.,
& Schooler, J.  (2022, February 7). 
Retrieved from psyarxiv.com/m5a6g


Do you persist as the same person over time because you keep the same mind or because you keep the same body? Philosophers have long investigated this question of personal identity with thought experiments. Cognitive scientists have joined this tradition by assessing lay intuitions about those cases. Much of this work has focused on judgments of identity continuity. But identity also has practical significance: obligations are tagged to one’s identity over time. Understanding how someone persists as the same person over time could provide insight into how and why moral and legal obligations persist. In this paper, we investigate judgments of obligations in hypothetical cases where a person’s mind and body diverge (e.g., brain transplant cases). We find a striking pattern of results: In assigning obligations in these identity test cases, people are divided among three groups: “body-followers”, “mind-followers”, and “splitters”—people who say that the obligation is split between the mind and the body. Across studies, responses are predicted by a variety of factors, including mind/body dualism, essentialism, education, and professional training. When we give this task to professional lawyers, accountants, and bankers, we find they are more inclined to rely on bodily continuity in tracking obligations. These findings reveal not only the heterogeneity of intuitions about identity, but how these intuitions relate to the legal standing of an individual’s obligations.

From the General Discussion

Whether one is a mind-follower, body-follower, or splitter was predicted by several psychological traits, suggesting that participants’ decisions were not arbitrary. Furthermore, the use of comprehension checks did not moderate the results, so the variation in assigned obligations was not due to participants misunderstanding the scenarios. We found that physical essentialism and mind/body dualism predict body-following, while the best-educated participants are more likely to be mind-followers and the least educated are more likely to be splitters. The professional experts were more likely to be body-followers.

Essentialism predicted the belief that obligations track the body. This may seem mysterious until we consider that much of essentialism has to do with tracking physical (if invisible) properties. Here is a sample item from the Beliefs in Essentialism Scale: Trying on a sweater that Hitler wore, even if it was washed thoroughly beforehand, would make me very uncomfortable (Horne & Cimpian, 2019). If someone believes that essences are physically real in this way, it makes sense that they would also believe that obligations and identity go with the body.

Consideration of specific items in the Mind/Body Dualism Scale (Nadelhoffer et al., 2014) similarly offers insight into its relationship with the continuity of obligation in this study. Items like Human action can only be understood in terms of our souls and minds and not just in terms of our brains indicate that, for mind/body dualists, a person is not reducible to their brain. Accordingly, for mind/body dualists, though the brain may change, something else remains in the body that maintains both identity and obligations.
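The three-way division the authors report can be illustrated with a small sketch. The scoring rule below is hypothetical, not the authors' actual coding rubric: it assumes each participant assigns some share of an obligation to the person who received the original mind in a brain-transplant case, and classifies them by that share.

```python
# Hypothetical scoring rule (illustrative only; not the authors' rubric).
# `mind_share` is the fraction of an obligation a participant assigns to
# the person who received the original mind in a brain-transplant case.
def classify(mind_share: float, threshold: float = 0.8) -> str:
    if mind_share >= threshold:
        return "mind-follower"    # obligation tracks the mind
    if mind_share <= 1.0 - threshold:
        return "body-follower"    # obligation tracks the body
    return "splitter"             # obligation divided between the two

print(classify(0.95))  # mind-follower
print(classify(0.05))  # body-follower
print(classify(0.50))  # splitter
```

The 0.8 threshold is an arbitrary choice for illustration; any cutoff separating near-total assignment from an intermediate split would serve the same purpose.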

Thursday, February 10, 2022

Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them

Santoni de Sio, F., Mecacci, G. 
Philos. Technol. 34, 1057–1084 (2021). 


The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral accountability, public accountability, and active responsibility) caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also occur with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It offers a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved simply by introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control,” that is, systems aligned with the relevant human reasons and capacities.


The Tracing Condition and its Payoffs for Responsibility

Unlike proposals based on new forms of legal liability, MHC (Meaningful Human Control) proposes that socio-technical systems also be systematically designed to avoid gaps in moral culpability, accountability, and active responsibility. The “tracing condition” holds that a system can remain under MHC only in the presence of a solid alignment between the system and the technical, motivational, and moral capacities of the relevant agents involved, with different roles, in its design, control, and use. The direct goal of this condition is to promote a fair distribution of moral culpability, thereby avoiding two undesired results. First, scapegoating, i.e. agents being held culpable without having had a fair capacity to avoid wrongdoing (Elish, 2019): in the example of the automated driving systems above, for instance, the drivers’ relevant technical and motivational capacities not being sufficiently studied and trained. Second, impunity for avoidable accidents, i.e. culpability gaps: the impossibility of legitimately blaming anybody because no individual agent possesses all the relevant capacities, e.g. the managers/designers having the technical capacity but not the moral motivation to avoid accidents, and the drivers having the motivation but not the skills. The tracing condition also helps address accountability and active responsibility gaps. If a person or organisation is to be morally or publicly accountable, then they must also possess the specific capacity to discharge this duty: in another example discussed above, if a doctor is to remain accountable to her patients for her decisions, then she should maintain the capacity and motivation to understand the functioning of the AI system she uses and to explain her decisions to her patients.

Wednesday, December 8, 2021

Robot Evolution: Ethical Concerns

Eiben, A.E., Ellers, J., et al.
Front. Robot. AI, 03 November 2021


Rapid developments in evolutionary computation, robotics, 3D printing, and materials science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question of how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination, and suggest solutions for meaningful human control. Such concerns may seem far-fetched now; however, we posit that awareness must be created before the technology becomes mature.


Robot evolution is not science fiction anymore. The theory and the algorithms are available, and robots are already evolving in computer simulations, safely limited to virtual worlds. Meanwhile, the technology for real-world implementations is developing rapidly, and the first (semi-)autonomously reproducing and evolving robots are likely to arrive within a decade (Hale et al., 2019; Buchanan et al., 2020). Current research in this area is typically curiosity-driven, but it will become increasingly application-oriented as evolving robot systems can be employed in hostile or inaccessible environments, like seafloors, rainforests, ultra-deep mines, or other planets, where they develop themselves “on the job” without the need for direct human oversight.

A key insight of this paper is that the practice of second-order engineering, as induced by robot evolution, raises new issues outside the current discourse on AI and robot ethics. Our main message is that awareness must be created before the technology becomes mature, and that researchers and potential users should discuss how robot evolution can be responsibly controlled. Specifically, robot evolution needs careful ethical and methodological guidelines in order to minimize potential harms and maximize the benefits. Even though the evolutionary process is functionally autonomous, without a “steering wheel,” it still entails a necessity to assign responsibilities. This is crucial not only for holding someone responsible if things go wrong, but also for making sure that people take responsibility for certain aspects of the process; without people taking responsibility, the process cannot be effectively controlled. Given the potential benefits and harms and the complicated control issues, there is an urgent need to follow up on our ideas and think further about responsible robot evolution.

Tuesday, August 3, 2021

Get lucky: Situationism and circumstantial moral luck

Marcela Herdova & Stephen Kearns 
(2015) Philosophical Explorations, 18:3, 362-377
DOI: 10.1080/13869795.2015.1026923


Situationism is, roughly, the thesis that normatively irrelevant environmental factors have a great impact on our behaviour without our being aware of this influence. Surprisingly, there has been little work done on the connection between situationism and moral luck. Given that it is often a matter of luck what situations we find ourselves in, and that we are greatly influenced by the circumstances we face, it seems also to be a matter of luck whether we are blameworthy or praiseworthy for our actions in those circumstances. We argue that such situationist moral luck, as a variety of circumstantial moral luck, exemplifies a distinct and interesting type of moral luck. Further, there is a case to be made that situationist moral luck is perhaps more worrying than some other well-discussed cases of (supposed) moral luck.

From the Conclusion

Those who insist on the significance of luck to our practices of moral assessment walk something of a tightrope. If we consider agents who differ only in the external results of their actions, and who face normatively similar circumstances, it is difficult to maintain that there is any major difference in the degree of such agents’ moral responsibility. If we consider agents who differ rather significantly, and who face normatively distinct situations, then though luck may play a role in what normative circumstances they face, there is much to base a moral assessment on that is either under the agents’ control or distinctive of each agent and their respective responses to their normative circumstances (or both). The role luck plays in our assessments of such agents, then, is arguably small enough that it is unclear that any difference in moral assessment can properly be said to be due to this luck (at least to an extent that should worry us or that is in considerable tension with our usual moral thinking).

Monday, July 26, 2021

Do doctors engaging in advocacy speak for themselves or their profession?

Elizabeth Lanphier
Journal of Medical Ethics Blog
Originally posted 17 June 21

Here is an excerpt:

My concern is not the claim that expertise should be shared. (It should!) Nor do I think there is any neat distinction between physician responsibilities for individual health and public health. But I worry that when Strous and Karni alternately frame physician duties to “speak out” as individual duties and collective ones, they collapse necessary distinctions between the risks, benefits, and demands of these two types of obligations.

Many of us have various role-based individual responsibilities. We can have obligations as a parent, as a citizen, or as a professional. Having an individual responsibility as a physician involves duties to your patients, but also general duties to care in the event you are in a situation in which your expertise is needed (the “is there a doctor on this flight?” scenario).

Collective responsibility, on the other hand, is when a group has a responsibility as a group. The philosophical literature debates hard-to-resolve questions about what it means to be a “group,” and how groups come to have or discharge responsibilities. Collective responsibility raises complicated questions like: If physicians have a collective responsibility to speak out during the COVID-19 pandemic, does every physician have such an obligation? Does any individual physician?

Because individual obligations attribute duties to specific persons responsible for carrying them out, in ways collective duties tend not to, I see why individual physician obligations are attractive. But this comes with risks. One risk is that a physician speaks out as an individual, appealing to the authority of their medical credentials, but not in alignment with their profession.

In my essay I describe a family physician inviting his extended family for a holiday meal during a peak period of SARS-CoV-2 transmission because he didn’t think COVID-19 was a “big deal.”

More infamously, Dr. Scott Atlas served as Donald J. Trump’s coronavirus advisor; although he is a physician, he did not have experience in public health, infectious disease, or critical care medicine applicable to COVID-19. Atlas was a physician speaking as a physician, but he routinely promoted views starkly different from those of physicians with expertise relevant to the pandemic, and from the guidance coming from the scientific and medical communities.

Monday, July 12, 2021

Workplace automation without achievement gaps: a reply to Danaher and Nyholm

Tigard, D.W. 
AI Ethics (2021). 


In a recent article in this journal, John Danaher and Sven Nyholm raise well-founded concerns that advances in AI-based automation will threaten the values of meaningful work. In particular, they present a strong case for thinking that automation will undermine our achievements, thereby rendering our work less meaningful. It is also claimed that the threat to achievements in the workplace will open up ‘achievement gaps’, the flipside of the ‘responsibility gaps’ now commonly discussed in technology ethics. This claim, however, is far less worrisome than the general concerns about widespread automation, namely because it rests on several conceptual ambiguities. With this paper, I argue that although the threat to achievements in the workplace is problematic and calls for policy responses of the sort Danaher and Nyholm outline, when framed in terms of responsibility, there are no ‘achievement gaps’.

From the Conclusion

In closing, it is worth stopping to ask: Who exactly is the primary subject of “harm” (broadly speaking) in the supposed gap scenarios? Typically, in cases of responsibility gaps, the harm is seen as falling upon the person inclined to respond (usually with blame) and finding no one to respond to. This is often because they seek apologies or some sort of remuneration, and as we can imagine, it sets back their interests when such demands remain unfulfilled. But what about cases of achievement gaps? If we want to draw truly close analogies between the two scenarios, we would consider the subject of harm to be the person inclined to respond with praise and finding no one to praise. And perhaps there is some degree of disappointment here, but it hardly seems to be a worrisome kind of experience for that person. With this in mind, we might say there is yet another mismatch between responsibility gaps and achievement gaps. Nevertheless, on the account of Danaher and Nyholm, the harm is seen as falling upon the humans who miss out on achieving something in the workplace. But on that picture, we run into a sort of non-identity problem—for as soon as we identify the subjects of this kind of harm, we thereby affirm that it is not fitting to praise them for the workplace achievement, and so they cannot really be harmed in this way.

Wednesday, June 2, 2021

The clockwork universe: is free will an illusion?

Oliver Burkeman
The Guardian
Originally posted 27 APR 21

Here is an excerpt:

And Saul Smilansky, a professor of philosophy at the University of Haifa in Israel, who believes the popular notion of free will is a mistake, told me that if a graduate student who was prone to depression sought to study the subject with him, he would try to dissuade them. “Look, I’m naturally a buoyant person,” he said. “I have the mentality of a village idiot: it’s easy to make me happy. Nevertheless, the free will problem is really depressing if you take it seriously. It hasn’t made me happy, and in retrospect, if I were at graduate school again, maybe a different topic would have been preferable.”

Smilansky is an advocate of what he calls “illusionism”, the idea that although free will as conventionally defined is unreal, it’s crucial people go on believing otherwise – from which it follows that an article like this one might be actively dangerous. (Twenty years ago, he said, he might have refused to speak to me, but these days free will scepticism was so widely discussed that “the horse has left the barn”.) “On the deepest level, if people really understood what’s going on – and I don’t think I’ve fully internalised the implications myself, even after all these years – it’s just too frightening and difficult,” Smilansky said. “For anyone who’s morally and emotionally deep, it’s really depressing and destructive. It would really threaten our sense of self, our sense of personal value. The truth is just too awful here.”


By far the most unsettling implication of the case against free will, for most who encounter it, is what it seems to say about morality: that nobody, ever, truly deserves reward or punishment for what they do, because what they do is the result of blind deterministic forces (plus maybe a little quantum randomness). “For the free will sceptic,” writes Gregg Caruso in his new book Just Deserts, a collection of dialogues with fellow philosopher Daniel Dennett, “it is never fair to treat anyone as morally responsible.” Were we to accept the full implications of that idea, the way we treat each other, and especially the way we treat criminals, might change beyond recognition.

Monday, March 8, 2021

Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Stix, C., Maas, M.M.
AI Ethics (2021). 


Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

From the Conclusion

AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present a perceived fragmentation in the responsible AI policy community between clusters of scholars focusing on ‘near-term’ AI risks and those focusing on ‘longer-term’ risks. This paper has sought to map the practical space for inter-community collaboration, with a view towards the practical development of AI policy.

As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an ‘incompletely theorized agreement’, for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework.

Friday, March 5, 2021

Free to blame? Belief in free will is related to victim blaming

Genschow, O., & Vehlow, B.
Consciousness and Cognition
Volume 88, February 2021, 103074


The more people believe in free will, the harsher their punishment of criminal offenders. A reason for this finding is that belief in free will leads individuals to perceive others as responsible for their behavior. While research supporting this notion has mainly focused on criminal offenders, the perspective of the victims has been neglected so far. We filled this gap and hypothesized that individuals’ belief in free will is positively correlated with victim blaming—the tendency to make victims responsible for their bad luck. In three studies, we found that the more individuals believe in free will, the more they blame victims. Study 3 revealed that belief in free will is correlated with victim blaming even when controlling for just world beliefs, religious worldviews, and political ideology. The results contribute to a more differentiated view of the role of free will beliefs and attributed intentions.


• Past research indicated that belief in free will increases the punishment of criminal offenders.

• However, this research ignored the perception of the victims.

• We filled this gap by conducting three studies.

• All studies find that belief in free will correlates with the tendency to blame victims.

From the Discussion

In the last couple of decades, claims that free will is nothing more than an illusion have become prevalent in the popular press (e.g., Chivers 2010; Griffin, 2016; Wolfe, 1997). Based on such claims, scholars across disciplines started debating the potential societal consequences should people start disbelieving in free will. For example, some philosophers argued that disbelief in free will would have catastrophic consequences, because people would no longer try to control their behavior and would start acting immorally (e.g., Smilansky, 2000, 2002). Likewise, psychological research has mainly focused on the downsides of disbelief in free will. For example, weakening free will belief led participants to behave less morally and responsibly (Baumeister et al., 2009; Protzko et al., 2016; Vohs & Schooler, 2008). In contrast to these results, our findings illustrate a more positive side of disbelief in free will, as higher levels of disbelief in free will would reduce victim blaming.
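The control analysis described for Study 3 can be pictured as a partial correlation: residualize both free-will belief and victim blaming on a covariate, then correlate the residuals. The sketch below uses invented scores and assumed variable names (fwb, vb, jwb), not the authors' materials or data, and controls for a single covariate for simplicity.

```python
# Partial correlation by residualization (illustrative sketch; the data
# below are invented, not from Genschow & Vehlow).
def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    # Pearson product-moment correlation coefficient
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def residualize(y, x):
    # Fit simple OLS y = a + b*x and return the residuals of y.
    mx, my = mean(x), mean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    a0 = my - b * mx
    return [c - (a0 + b * a) for a, c in zip(x, y)]

# Invented scores: free-will belief, victim blaming, just-world belief
fwb = [2.0, 4.0, 5.0, 4.0, 6.0, 3.0]
vb  = [1.0, 3.0, 4.0, 5.0, 6.0, 2.0]
jwb = [1.0, 2.0, 3.0, 4.0, 5.0, 2.0]

r_raw     = pearson(fwb, vb)
r_partial = pearson(residualize(fwb, jwb), residualize(vb, jwb))
print(round(r_raw, 2), round(r_partial, 2))
```

A real analysis would residualize on all covariates at once (just-world beliefs, religiosity, political ideology), which amounts to a multiple regression rather than the single-covariate OLS shown here.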

Wednesday, February 17, 2021

Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

Heersmink, R. 
Sci Eng Ethics 23, 431–448 (2017). 


There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes and (b) have a certain moral status which is contingent on their cognitive status, and I (c) ask whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and the ethics of (cognitive) technology.


Both Floridi and Verbeek argue that moral actions, whether positive or negative, can be the result of interactions between humans and technology, giving artifacts a much more prominent role in ethical theory than most philosophers have. They both develop a non-anthropocentric systems approach to morality. Floridi focuses on large-scale “multiagent systems,” whereas Verbeek focuses on small-scale “human–technology associations.” But both attribute morality or moral agency to systems comprising humans and technological artifacts. On their views, moral agency is thus a system property and not found exclusively in human agents. Does this mean that the artifacts and software programs involved in the process have moral agency? Neither of them attributes moral agency to the artifactual components of the larger system. It is not inconsistent to say that the human–artifact system has moral agency without saying that its artifactual components have moral agency. Systems often have different properties from their components. The difference between Floridi’s and Verbeek’s approaches roughly mirrors the difference between distributed and extended cognition, in that Floridi and distributed cognition theory focus on large-scale systems without central controllers, whereas Verbeek and extended cognition theory focus on small-scale systems in which agents interact with and control an informational artifact. In Floridi’s example, the technology seems semi-autonomous: the software and computer systems automatically do what they are designed to do. Presumably, the money is automatically transferred to Oxfam, implying that the technology is a mere cog in a larger socio-technical system that realises positive moral outcomes. There seems to be no central controller in this system: it is therefore difficult to see it as an extended agency whose intentions are being realised.

Saturday, June 6, 2020

Motivated misremembering of selfish decisions

Carlson, R.W., Maréchal, M.A., Oud, B. et al.
Nature Communications 11, 2100 (2020).


People often prioritize their own interests, but also like to see themselves as moral. How do individuals resolve this tension? One way to both pursue personal gain and preserve a moral self-image is to misremember the extent of one’s selfishness. Here, we test this possibility. Across five experiments (N = 3190), we find that people tend to recall being more generous in the past than they actually were, even when they are incentivized to recall their decisions accurately. Crucially, this motivated misremembering effect occurs chiefly for individuals whose choices violate their own fairness standards, irrespective of how high or low those standards are. Moreover, this effect disappears under conditions where people no longer perceive themselves as responsible for their fairness violations. Together, these findings suggest that when people’s actions fall short of their personal standards, they may misremember the extent of their selfishness, thereby potentially warding off threats to their moral self-image.

From the Discussion

Specifically, these findings suggest that those who violate (as opposed to uphold) their personal standards misremember the extent of their selfishness. Moreover, they highlight the key motivational role of perceived responsibility for norm violations—consistent with classic accounts from social psychology, and recent evidence from experimental economics. However, since we focused specifically on those who reported no responsibility, it is also conceivable that other factors might have differed between the participants who felt responsible and those who did not.

We interpret these results as evidence of motivated memory distortion; however, an alternative account would hold that these individuals were aware of their true level of generosity at recall, yet were willing to pay a cost to claim having been more generous. While this account is not inconsistent with prior work, it should be less likely in a context which is anonymous, involves no future interaction with any partners, and requires memories to be verified by an experimenter. Accordingly, we found little to no effect of trait social desirability on people’s reported memories. Together, these points suggest that people were actually misremembering their choices, rather than consciously lying about them.
