Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Responsibility.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it is crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently threatens our moral responsibility system. Instead, they ask what goods, if any, this system provides, such as those attached to fitting blame and praise, and how these can be secured even in the presence of AI. To that end, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethic Theory Moral Prac 26, 361–375 (2023).

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. Moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.

The authors argue that the moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Sunday, October 9, 2022

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines 30, 195–218 (2020).
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Free will and Autonomy

Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007) while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. Others (Powers 2006; Tonkens 2009) have proposed that the plausibility of a free will condition on moral agency may vary depending on what type of normative ethical theory is assumed, but they have not developed this idea further.

Despite appealing to the concept of free will, this portion of the AMA debate does not engage with key problems in the free will literature, such as the debate about compatibilism and incompatibilism (O’Connor 2016). Those in the AMA debate assume the existence of free will among humans, and ask whether artificial entities can satisfy a source control condition (McKenna et al. 2015). That is, the question is whether or not such entities can be the origins of their actions in a way that allows them to control what they do in the sense assumed of human moral agents.

An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that ‘… the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent’ (Johnson 2006 p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is needed for moral agency and that humans do possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Shaw et al. 2019). Additionally, we note that Johnson’s idea about the presence of a ‘mysterious aspect’ of human moral agents might allow for AMA in the same way as Dreyfus and Hubert’s reference to the subconscious: artificial entities may be built to incorporate this aspect.

The question of sourcehood in the AMA debate connects to the independence argument: For instance, when it is claimed that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008), this is thought to imply that machines can never be the true or genuine source of their own actions. This argument questions whether the independence required for moral agency (by both functionalists and standardists) can be found in a machine. If a machine’s repertoire of behaviors and responses is the result of elaborate design then it is not independent, the argument goes. Floridi and Sanders question this proposal by referring to the complexity of ‘human programming’, such as genes and arranged environmental factors (e.g. education). 

Monday, July 12, 2021

Workplace automation without achievement gaps: a reply to Danaher and Nyholm

Tigard, D.W. 
AI Ethics (2021). 
https://doi.org/10.1007/s43681-021-00064-1

Abstract

In a recent article in this journal, John Danaher and Sven Nyholm raise well-founded concerns that the advances in AI-based automation will threaten the values of meaningful work. In particular, they present a strong case for thinking that automation will undermine our achievements, thereby rendering our work less meaningful. It is also claimed that the threat to achievements in the workplace will open up ‘achievement gaps’—the flipside of the ‘responsibility gaps’ now commonly discussed in technology ethics. This claim, however, is far less worrisome than the general concerns for widespread automation, namely because it rests on several conceptual ambiguities. With this paper, I argue that although the threat to achievements in the workplace is problematic and calls for policy responses of the sort Danaher and Nyholm outline, when framed in terms of responsibility, there are no ‘achievement gaps’.

From the Conclusion

In closing, it is worth stopping to ask: Who exactly is the primary subject of “harm” (broadly speaking) in the supposed gap scenarios? Typically, in cases of responsibility gaps, the harm is seen as falling upon the person inclined to respond (usually with blame) and finding no one to respond to. This is often because they seek apologies or some sort of remuneration, and as we can imagine, it sets back their interests when such demands remain unfulfilled. But what about cases of achievement gaps? If we want to draw truly close analogies between the two scenarios, we would consider the subject of harm to be the person inclined to respond with praise and finding no one to praise. And perhaps there is some degree of disappointment here, but it hardly seems to be a worrisome kind of experience for that person. With this in mind, we might say there is yet another mismatch between responsibility gaps and achievement gaps. Nevertheless, on the account of Danaher and Nyholm, the harm is seen as falling upon the humans who miss out on achieving something in the workplace. But on that picture, we run into a sort of non-identity problem—for as soon as we identify the subjects of this kind of harm, we thereby affirm that it is not fitting to praise them for the workplace achievement, and so they cannot really be harmed in this way.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.

Sunday, May 9, 2021

For Whom Does Determinism Undermine Moral Responsibility? Surveying the Conditions for Free Will Across Cultures

I. Hannikainen, et al.
Front. Psychol., 05 November 2019
https://doi.org/10.3389/fpsyg.2019.02428

Abstract

Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

Discussion

At the aggregate level, we found that participants blamed and punished agents whether they only lacked alternate possibilities (Miller and Feltz, 2011) or whether they also lacked sourcehood (Nahmias et al., 2005; Nichols and Knobe, 2007). Thus, echoing early findings, laypeople did not take alternate possibilities or sourcehood as necessary conditions for free will and moral responsibility.

Yet, our study also revealed a dramatic cultural difference: Throughout the Americas, Europe, and the Middle East, participants viewed the perpetrator with sourcehood (in the CI scenario) as freer and more morally responsible than the perpetrator without sourcehood (in the AS scenario). Meanwhile, South and East Asian participants evaluated both perpetrators in a strikingly similar way. We interpreted these results in light of cultural variation in dispositional versus situational attributions (Miller, 1984; Morris and Peng, 1994; Choi et al., 1999; Chiu et al., 2000). From a dispositionist perspective, participants may be especially attuned to the absence of sourcehood: When an agent is the source of their action, people may naturally conjure dispositionist explanations that refer to her goals, desires (e.g., because “she wanted a new life”) or character (e.g., because “she is ruthless”). In contrast, when actions result from a causal chain originating at the beginning of the universe, explanations of this sort – implying sourcehood – seem particularly unsatisfactory and incomplete. In contrast, from a situationist perspective, whether the agent could be seen as the source of her action may be largely irrelevant: Instead, a situationist may think of others’ behavior as the product of extrinsic pressures – from momentary upheaval, to the way they were raised, social norms or fate – and thus perceive both agents, in the CI and AS cases, as similar in matters of free will and moral responsibility.

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020

Abstract

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.

Highlights

• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• Deliberating agent earns more blame than implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer

• Asymmetry in blame vs. praise suggests only the decider is judged as moral agent

• Effect is attenuated if decider's job is primarily to implement.

Monday, August 24, 2020

Natural Compatibilism, Indeterminism, and Intrusive Metaphysics

Nadelhoffer, T., Rose, D., Buckwalter, W.,
& Nichols, S. (2019, August 25).
https://doi.org/10.31219/osf.io/rzbqh

Abstract

The claim that common sense regards free will and moral responsibility as compatible with determinism has played a central role in both analytic and experimental philosophy. In this paper, we show that evidence in favor of this “natural compatibilism” is undermined by the role that indeterministic metaphysical views play in how people construe deterministic scenarios. To demonstrate this, we re-examine two classic studies that have been used to support natural compatibilism. We find that although people give apparently compatibilist responses, this is largely explained by the fact that people import an indeterministic metaphysics into deterministic scenarios when making judgments about freedom and responsibility. We conclude that judgments based on these scenarios are not reliable evidence for natural compatibilism.

Here is an excerpt from the Discussion:

The most obvious rejoinder for natural compatibilists is to deny that our intrusion items are properly construed as measures of creeping indeterminism. On this view, our items beg the question against compatibilism and our findings can be given a compatibilist-friendly interpretation. Here, the natural compatibilist is likely to appeal to the difference between the unconditional and the conditional ability to do otherwise. In an indeterministic universe, agents can have the unconditional ability to do otherwise—that is, they could have done otherwise even if everything leading up to their decision remained exactly the same. In a deterministic universe, on the other hand, agents merely have the conditional ability to do otherwise—that is, agents could have acted differently only insofar as something (either the past or the laws) had been different than it actually was. Compatibilists suggest that this conditional ability to do otherwise (along with other cognitive and volitional capacities) can ground free will and moral responsibility even in a deterministic universe. Incompatibilists disagree, insisting instead that free will requires indeterminism and the unconditional ability to do otherwise.

Wednesday, July 8, 2020

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines (2020). 
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Conclusion

We have argued that to be able to contribute to pressing practical problems, the debate on AMA should be redirected to address outright normative ethical questions. Specifically, the questions of how and to what extent artificial entities should be involved in human practices where we normally assume moral agency and responsibility. The reason for our proposal is the high degree of conceptual confusion and lack of practical usefulness of the traditional AMA debate. And this reason seems especially strong in light of the current fast development and implementation of advanced, autonomous and self-evolving AI and robotic constructs.

Tuesday, January 14, 2020

Exceptionality Effect in Agency: Exceptional Choices Attributed Higher Free Will Than Routine

Fillon, A., Lantian, A., Feldman, G., & N'gbala, A.
PsyArXiv
Originally posted on 9 Nov 19

Abstract

Exceptionality effect is the widely cited phenomenon that people experience stronger regret regarding negative outcomes that are a result of more exceptional circumstances, compared to routine. We hypothesize that the exceptionality-routine attribution asymmetry would extend to attributions of freedom and responsibility. In Experiment 1 (N = 338), we found that people attributed more free will to exceptional behavior compared to routine, when the exception was due to self-choice rather than due to external circumstances. In Experiment 2 (N = 561), we replicated and generalized the effect of exceptionality on attributions of free will to other scenarios, with support for the classic exceptionality effect regarding regret and an extension to moral responsibility. In Experiment 3 (N = 128), we replicated these effects in a within-subject design. When using a classic experimental philosophy paradigm contrasting a deterministic and an indeterministic universe, we found that the results were robust across both contexts. We conclude that there is a consistent support for a link between exceptionality and free will attributions.

From the Conclusion:

Although based on different theoretical frameworks, our results on attributions of free will could be related to the findings of Bear and Knobe (2016). They found that a behavior that was performed “actively” rather than “passively” modifies people’s judgment about the compatibility of this behavior with the thesis of causal determinism. More concretely, people consider that a behavior performed actively (such as composing a highly technical legal document) is less possible (i.e., less compatible) in a causally deterministic universe than a behavior performed passively (such as impulsively shoplifting from a convenience store; Bear & Knobe, 2016). According to Bear and Knobe (2016), people relied on two cues to determine the active or passive feature of a behavior: mental effort and spontaneity. By adopting this framework, we may assimilate an exceptional behavior to an active behavior (because it is “breaking off from the flow of things,” and requires mental effort and spontaneity) and a routine behavior to a passive behavior (because it is “going with the flow,” and does not require mental effort or spontaneity). In the same vein, an agent acting spontaneously is considered freer than an agent acting deliberately (Vierkant et al., 2019). Although Vierkant et al. (2019) manipulated the agent’s choice (spontaneous vs. deliberate) in a within-subject design, their study may suggest that when deliberation (or mental effort) and spontaneity are experimentally contrasted, it is spontaneity that seems to be the driving force behind the increase in the agent’s perceived free will.

Tuesday, November 19, 2019

Moral Responsibility

Talbert, Matthew
The Stanford Encyclopedia of Philosophy 
(Winter 2019 Edition), Edward N. Zalta (ed.)

Making judgments about whether a person is morally responsible for her behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.

The judgment that a person is morally responsible for her behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing her behavior as arising (in the right way) from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is the task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and non-human animals, very young children, and those suffering from severe developmental disabilities or dementia (to give a few examples) are generally taken to lack them.

To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that she is morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012: 16–17 and M. Zimmerman 1988: 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good.

Sunday, November 10, 2019

For whom does determinism undermine moral responsibility? Surveying the conditions for free will across cultures

Ivar Hannikainen and others
PsyArXiv Preprints
Originally published October 15, 2019

Abstract

Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

Saturday, November 9, 2019

Debunking (the) Retribution (Gap)

Steven R. Kraaijeveld
Science and Engineering Ethics
https://doi.org/10.1007/s11948-019-00148-6

Abstract

Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I apply a debunking argument for deontological intuitions in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

Wednesday, July 17, 2019

Responsibility for Killer Robots

Johannes Himmelreich
Ethic Theory Moral Prac (2019).
https://doi.org/10.1007/s10677-019-10007-9

Abstract

Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in: Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003)  or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuro-scientific findings as irrelevant to their views on free will. They do not believe that determinist processes are incompatible with free will to begin with, hence, do not understand why deterministic processes in our brain would be (see Sie and Wouters 2008, 2010). That latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it) and the main question discussed is whether that assumption is compatible with determinism.  In this chapter we want to steer clear from this 'metaphysical' discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the 'pragmatic sentimentalist' approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called 'sentimentalist.' In this approach, the practical purposes of the concept of free will are put center stage. This is why it is called 'pragmatist.'

Friday, January 25, 2019

Decision-Making and Self-Governing Systems

Adina L. Roskies
Neuroethics
October 2018, Volume 11, Issue 3, pp 245–257

Abstract

Neuroscience has illuminated the neural basis of decision-making, providing evidence that supports specific models of decision-processes. These models typically are quite mechanical, the realization of abstract mathematical “diffusion to bound” models. While effective decision-making seems to be essential for sophisticated behavior, central to an account of freedom, and a necessary characteristic of self-governing systems, it is not clear how the simple models neuroscience inspires can underlie the notion of self-governance. Drawing from both philosophy and neuroscience I explore ways in which the proposed decision-making architectures can play a role in systems that can reasonably be thought of as “self-governing”.
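
The abstract's reference to "diffusion to bound" models can be made concrete with a toy simulation. The sketch below is not Roskies's own model; the parameter names and values (drift, bound, noise_sd, dt) are illustrative assumptions. The idea is simply that noisy evidence accumulates over time and a choice is registered when the running total crosses an upper or lower bound.

```python
# A minimal, hypothetical sketch of a "diffusion to bound" decision process.
# Parameters and values are illustrative, not drawn from any specific study.
import random

def diffusion_to_bound(drift=0.1, bound=1.0, noise_sd=0.3, dt=0.01, max_steps=10_000):
    """Simulate one decision; returns (choice, decision_time)."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # Accumulate mean drift plus Gaussian noise at each time step.
        evidence += drift * dt + random.gauss(0.0, noise_sd) * dt ** 0.5
        if evidence >= bound:
            return "option A", step * dt
        if evidence <= -bound:
            return "option B", step * dt
    return "no decision", max_steps * dt

if __name__ == "__main__":
    runs = [diffusion_to_bound() for _ in range(1000)]
    prop_a = sum(1 for choice, _ in runs if choice == "option A") / len(runs)
    mean_rt = sum(t for _, t in runs) / len(runs)
    print(f"P(option A) approx {prop_a:.2f}, mean decision time approx {mean_rt:.2f}")
```

On this picture, "deciding" is just the first crossing of a threshold, which is precisely the kind of mechanical description that raises the question of how such a process could amount to self-governance.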

Here is an excerpt:

The importance of prospection for self-governance cannot be underestimated. One example in which it promises to play an important role is in the exercise of and failures of self-control. Philosophers have long been puzzled by the apparent possibility of akrasia or weakness of will: choosing to act in ways that one judges not to be in one’s best interest. Weakness of will is thought to be an example of irrational choice. If one’s theory of choice is that one always decides to pursue the option that has the highest value, and that it is rational to choose what one most values, it is hard to explain irrational choices. Apparent cases of weakness of will would really be cases of mistaken valuation: overvaluing an option that is in fact not the most valuable option. And indeed, if one cannot rationally criticize the strength of desires (see Hume’s famous observation that “it is not against reason that I should prefer the destruction of half the world to the pricking of my little finger”), we cannot explain irrational choice.

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent will be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent will be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

Sunday, April 15, 2018

What If There Is No Ethical Way to Act in Syria Now?

Sigal Samuel
The Atlantic
Originally posted April 13, 2018

For seven years now, America has been struggling to understand its moral responsibility in Syria. For every urgent argument to intervene against Syrian President Bashar al-Assad to stop the mass killing of civilians, there were ready responses about the risks of causing more destruction than could be averted, or even escalating to a major war with other powers in Syria. In the end, American intervention there has been tailored mostly to a narrow perception of American interests in stopping the threat of terror. But the fundamental questions are still unresolved: What exactly was the moral course of action in Syria? And more urgently, what—if any—is the moral course of action now?

The war has left roughly half a million people dead—the UN has stopped counting—but the question of moral responsibility has taken on new urgency in the wake of a suspected chemical attack over the weekend. As President Trump threatened to launch retaliatory missile strikes, I spoke about America’s ethical responsibility with some of the world’s leading moral philosophers. These are people whose job it is to ascertain the right thing to do in any given situation. All of them suggested that, years ago, America might have been able to intervene in a moral way to stop the killing in the Syrian civil war. But asked what America should do now, they all gave the same startling response: They don’t know.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
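
To make the idea of an "accident-algorithm" concrete, here is a deliberately simplified sketch, not drawn from Nyholm and Smids: a rule that ranks unavoidable-collision options by expected harm. The option names, harm scores, and probabilities are hypothetical placeholders.

```python
# A deliberately simplified, hypothetical "accident-algorithm": given unavoidable
# collision options, pick the one with the lowest expected harm. The options,
# harm scores, and probabilities below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CollisionOption:
    name: str
    outcomes: list[tuple[float, float]]  # (probability, harm score) pairs

    def expected_harm(self) -> float:
        return sum(p * harm for p, harm in self.outcomes)

def least_expected_harm(options: list[CollisionOption]) -> CollisionOption:
    # Choose the option whose probability-weighted harm is smallest.
    return min(options, key=lambda option: option.expected_harm())

if __name__ == "__main__":
    options = [
        CollisionOption("brake in lane", [(0.7, 2.0), (0.3, 8.0)]),
        CollisionOption("swerve right", [(0.9, 1.0), (0.1, 20.0)]),
    ]
    best = least_expected_harm(options)
    print(best.name, round(best.expected_harm(), 2))
```

Even this toy rule points to the paper's third area of disanalogy: the probabilities and harm estimates feeding such a calculation must be fixed in advance and under uncertainty, which is part of what separates programming a car from standing at the trolley switch.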

Friday, February 9, 2018

Robots, Law and the Retribution Gap

John Danaher
Ethics and Information Technology
December 2016, Volume 18, Issue 4, pp 299–309

Abstract

We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.

From the Discussion Section

Third, and finally, I have argued that this retributive gap has three potentially significant social implications: (i) it could lead to an increased risk of moral scapegoating; (ii) it could erode confidence in the rule of law; and (iii) it could present a strategic opening for those who favour nonretributive approaches to crime and punishment.
