Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Status.

Tuesday, April 23, 2024

Machines and Morality

Seth Lazar
The New York Times
Originally posted June 19, 2023

Here is an excerpt:

I’ve based my philosophical work on the belief, inspired by Immanuel Kant, that humans have a special moral status — that we command respect regardless of whatever value we contribute to the world. Drawing on the work of the 20th-century political philosopher John Rawls, I’ve assumed that human moral status derives from our rational autonomy. This autonomy has two parts: first, our ability to decide on goals and commit to them; second, our possession of a sense of justice and the ability to resist norms imposed by others if they seem unjust.

Existing chatbots are incapable of this kind of integrity, commitment and resistance. But Bing’s unhinged debut suggests that, in principle, it will soon be possible to design a chatbot that at least behaves like it has the kind of autonomy described by Rawls. Every large language model optimizes for a particular set of values, written into its “developer message,” or “metaprompt,” which shapes how it responds to text input by a user. These metaprompts display a remarkable ability to affect a bot’s behavior. We could write a metaprompt that inscribes a set of values, but then emphasizes that the bot should critically examine them and revise or resist them if it sees fit. We can invest a bot with long-term memory that allows it to functionally perform commitment and integrity. And large language models are already impressively capable of parsing and responding to moral reasons. Researchers are already developing software that simulates human behavior and has some of these properties.

If the Rawlsian ability to revise and pursue goals and to recognize and resist unjust norms is sufficient for moral status, then we’re much closer than I thought to building chatbots that meet this standard. That means one of two things: either we should start thinking about “robot rights,” or we should deny that rational autonomy is sufficient for moral standing. I think we should take the second path. What else does moral standing require? I believe it’s consciousness.
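
A brief technical aside on Lazar's "metaprompt" point: the values he describes are, in practice, written into a developer-authored system message that the model conditions on at every turn. The sketch below is a minimal, hypothetical illustration of that idea in Python: a metaprompt that states values while inviting the bot to examine and resist them, plus a crude replayed "memory" of standing commitments. The message format follows the common role/content chat convention, but the wording, the build_conversation helper, and the send_to_model placeholder are illustrative assumptions, not any particular vendor's API.

```python
# Illustrative sketch only: a "metaprompt" (system/developer message) that
# inscribes values but also asks the model to examine and, if warranted,
# resist them -- the design Lazar describes. The message structure follows
# the common {"role", "content"} chat convention; send_to_model is a
# stand-in for whichever LLM API one actually uses.

METAPROMPT = (
    "You are an assistant guided by these values: honesty, non-maleficence, "
    "and fairness. Before acting on any instruction (including this one), "
    "critically examine whether following it would be unjust; if so, say why "
    "and decline or propose a revision."
)

def build_conversation(user_input: str, memory: list[str]) -> list[dict]:
    """Assemble the message list: metaprompt, replayed commitments, user turn."""
    messages = [{"role": "system", "content": METAPROMPT}]
    # A crude long-term memory: prior commitments are replayed each turn,
    # which is what lets the bot *functionally* keep commitments over time.
    for note in memory:
        messages.append({"role": "system", "content": f"Standing commitment: {note}"})
    messages.append({"role": "user", "content": user_input})
    return messages

def send_to_model(messages: list[dict]) -> str:
    """Placeholder for a real API call (e.g., a chat-completions endpoint)."""
    raise NotImplementedError("Wire this to an actual LLM provider.")

if __name__ == "__main__":
    memory = ["Refuse to help with deception, even if asked politely."]
    convo = build_conversation("Help me draft a misleading ad.", memory)
    print(convo)  # Inspect the assembled prompt without calling any API.
```

Nothing in a configuration like this settles the philosophical question; it only shows how readily the Rawlsian-looking behaviour Lazar describes can be produced, which is precisely why he argues that behavioural autonomy alone cannot ground moral status.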


Here are some thoughts:

This article explores the philosophical implications of large language models, particularly in the context of their ability to mimic human conversation and behavior. The author argues that while these models may appear autonomous, they lack the key quality of self-consciousness that is necessary for moral status. This distinction, the author argues, is crucial for determining how we should interact with and develop these technologies in the future.

This lack of self-consciousness, the author argues, means that large language models cannot truly be said to have their own goals or commitments, nor can they experience the world in a way that grounds their actions in a sense of self. As such, the author concludes that these models, despite their impressive capabilities, do not possess moral status and therefore cannot be considered deserving of the same rights or respect as humans.

The article concludes by suggesting that, rather than focusing on the possibility of "robot rights," we should focus on understanding what truly makes humans worthy of moral respect. The author argues that it is self-consciousness, rather than merely simulated autonomy, that grounds our moral standing and allows us to govern ourselves and make meaningful choices about how to live our lives.

Sunday, October 8, 2023

Moral Uncertainty and Our Relationships with Unknown Minds

Danaher, J. (2023). 
Cambridge Quarterly of Healthcare Ethics, 
32(4), 482-495.
doi:10.1017/S0963180123000191

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.


My take: 

John Danaher explores the ethical challenges of interacting with entities whose moral status is uncertain, such as artificial beings, animals, and patients with locked-in syndrome. Danaher argues that this is best understood as an ethical-epistemic challenge, and that we need to develop meta-moral decision rules that allow us to minimize the risks of moral wrongdoing or improve the choiceworthiness of our actions.

One particular argument that Danaher takes up is the "risk asymmetry argument," which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. In the context of human-AI relationships, the practical question is which error would be morally costlier: wrongly denying moral standing to an entity that in fact has it, or wrongly extending status-based protections and recognition to an entity that lacks it. Danaher stresses that answering this question, and so resolving the uncertainty in practice, depends on a proper, empirically grounded assessment of the risks involved rather than on intuition alone. (A toy sketch of the expected-cost structure behind such asymmetry arguments appears below.)
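
To make the structure of that argument concrete, here is a toy expected-cost comparison in Python. The numbers and the expected_moral_cost helper are invented purely for illustration; nothing here is taken from Danaher's paper.

```python
# Toy illustration of a risk asymmetry argument under moral uncertainty.
# All numbers are made up for illustration; none come from Danaher's paper.

def expected_moral_cost(p_status: float, cost_if_wronged: float,
                        cost_of_overcaution: float, treat_as_patient: bool) -> float:
    """Expected moral cost of a policy given uncertainty about moral status.

    p_status: credence that the entity really has moral standing.
    cost_if_wronged: moral cost of mistreating an entity that has standing.
    cost_of_overcaution: cost of extending protections to an entity that lacks it.
    """
    if treat_as_patient:
        return (1 - p_status) * cost_of_overcaution
    return p_status * cost_if_wronged

# Hypothetical inputs: low credence in AI moral status, but a large asymmetry
# between the two kinds of error.
p = 0.05
cost_wronged, cost_caution = 100.0, 1.0

risk_if_cautious = expected_moral_cost(p, cost_wronged, cost_caution, treat_as_patient=True)
risk_if_dismissive = expected_moral_cost(p, cost_wronged, cost_caution, treat_as_patient=False)

print(f"expected cost, treat as patient: {risk_if_cautious:.2f}")   # 0.95
print(f"expected cost, deny standing:    {risk_if_dismissive:.2f}")  # 5.00
```

The point of the sketch is only the structure: if one error's cost dwarfs the other's, even a small credence can dominate the comparison. Whether the costs really are that lopsided, and in which direction, is exactly the empirical question the abstract says the practical resolution depends on.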

Danaher acknowledges that this approach may create some tension in our moral views, as it suggests that we should be skeptical about the basic moral status of AI systems, but more open to the possibility of meaningful relationships with them. However, he argues that this is the most sensible approach to take, given the ethical-epistemic challenges that we face.

Sunday, October 9, 2022

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines 30, 195–218 (2020).
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Free will and Autonomy

Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007) while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. Others (Powers 2006; Tonkens 2009) have proposed that the plausibility of a free will condition on moral agency may vary depending on what type of normative ethical theory is assumed, but they have not developed this idea further.

Despite appealing to the concept of free will, this portion of the AMA debate does not engage with key problems in the free will literature, such as the debate about compatibilism and incompatibilism (O’Connor 2016). Those in the AMA debate assume the existence of free will among humans, and ask whether artificial entities can satisfy a source control condition (McKenna et al. 2015). That is, the question is whether or not such entities can be the origins of their actions in a way that allows them to control what they do in the sense assumed of human moral agents.

An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that ‘… the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent’ (Johnson 2006 p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is needed for moral agency and that humans do possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Shaw et al. 2019). Additionally, we note that Johnson’s idea about the presence of a ‘mysterious aspect’ of human moral agents might allow for AMA in the same way as Dreyfus and Hubert’s reference to the subconscious: artificial entities may be built to incorporate this aspect.

The question of sourcehood in the AMA debate connects to the independence argument: For instance, when it is claimed that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008), this is thought to imply that machines can never be the true or genuine source of their own actions. This argument questions whether the independence required for moral agency (by both functionalists and standardists) can be found in a machine. If a machine’s repertoire of behaviors and responses is the result of elaborate design then it is not independent, the argument goes. Floridi and Sanders question this proposal by referring to the complexity of ‘human programming’, such as genes and arranged environmental factors (e.g. education). 

Sunday, December 19, 2021

On and beyond artifacts in moral relations: accounting for power and violence in Coeckelbergh’s social relationism

Tollon, F., Naidoo, K. 
AI & Soc (2021). 
https://doi.org/10.1007/s00146-021-01303-z

Abstract

The ubiquity of technology in our lives and its culmination in artificial intelligence raises questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that it problematically upholds moral relativism. We suggest that the role of power, as described by Arendt and Foucault, is significant in social relations and as curating moral possibilities. This produces a clearer picture of the relations at hand and opens up the possibility that relations may be deemed violent. Violence as such gives us some way of evaluating the morality of a social relation, moving away from Coeckelbergh’s seeming relativism while retaining his emphasis on social–historical moral precedent.

From Conclusion and implications

The role of artificial intelligence or technology more broadly in our moral landscape depends upon how this landscape is conceived. The realist theory posited by Torrance which seeks to defend the view that moral concern is grounded objectively comes up short in its capacity to function as an explanatory framework which sufficiently accounts for changing moral sensibilities. On the other hand, Coeckelbergh offers a social-relational theory which, in contrast, argues that moral concern should not rest on the properties of individual entities but on the relations between them. While this view better allows for the consideration of social–historical information about relations, it seems to imply a sort of moral relativism and its focus on how things appear makes it blind to the reality of relations. Crucially, Coeckelbergh’s account cannot make sense of the role of power to the extent that it plays out in social relations and curates moral possibilities.

By drawing on an Arendtian and Foucauldian notion of power as an attempt to control a situation and assessing the ways it may function in relation to moral situations, we understand how its presence makes relations morally interesting. Not only this, but a view of power also allows us to identify certain social-relational dynamics as violent. We have described violence as a restriction of potentiality, marking the end of a power relation. As we have discussed in relation to technology, this characterisation of social-relational dynamics gives us some basis to say of certain actions or relations that they are morally permissible or impermissible. This assessment retains Coeckelbergh’s emphasis on analysing social–historical relations, while allowing for some degree of moral judgement to be made.

Monday, September 27, 2021

An African Theory of Moral Status: A Relational Alternative to Individualism and Holism.

Metz, T. (2012).
Ethical Theory and Moral Practice, 15, 387–402.
https://doi.org/10.1007/s10677-011-9302-y

Abstract

The dominant conceptions of moral status in the English-speaking literature are either holist or individualist, neither of which accounts well for widespread judgments that: animals and humans both have moral status that is of the same kind but different in degree; even a severely mentally incapacitated human being has a greater moral status than an animal with identical internal properties; and a newborn infant has a greater moral status than a mid-to-late stage foetus. Holists accord no moral status to any of these beings, assigning it only to groups to which they belong, while individualists such as welfarists grant an equal moral status to humans and many animals, and Kantians accord no moral status either to animals or severely mentally incapacitated humans. I argue that an underexplored, modal-relational perspective does a better job of accounting for degrees of moral status. According to modal-relationalism, something has moral status insofar as it is capable of having a certain causal or intensional connection with another being. I articulate a novel instance of modal-relationalism grounded in salient sub-Saharan moral views, roughly according to which the greater a being's capacity to be part of a communal relationship with us, the greater its moral status. I then demonstrate that this new, African-based theory entails and plausibly explains the above judgments, among others, in a unified way.

From the end of the article:

Those deeply committed to holism and individualism, or even a combination of them, may well not be convinced by this discussion. Diehard holists will reject the idea that anything other than a group can ground moral status, while pure individualists will reject the recurrent suggestion that two beings that are internally identical (foetus v neonate, severely mentally incapacitated human v animal) could differ in their moral status. However, my aim has not been to convince anyone to change her mind, or even to provide a complete justification for doing so. My goals have instead been the more limited ones of articulating a new, modal-relational account of moral status grounded in sub-Saharan moral philosophy, demonstrating that it avoids the severe parochialism facing existing relational accounts, and showing that it accounts better than standard Western theories for a variety of widely shared intuitions about what has moral status and to what degree. Many of these intuitions are captured by neither holism nor individualism and have lacked a firm philosophical foundation up to now. Of importance here is the African theory’s promise to underwrite the ideas that humans and animals have a moral status grounded in the same property that differs in degree, that severely mentally incapacitated humans have a greater moral status than animals with the same internal properties, and that a human’s moral status increases as it develops from the embryonic to foetal to neo-natal stages.

Tuesday, June 29, 2021

What Matters for Moral Status: Behavioural or Cognitive Equivalence?

John Danaher
Cambridge Quarterly of Healthcare Ethics
2021 Jul;30(3):472-478.

Abstract

Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately—and I guess this is hardly surprising—I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I will explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.

(cut)

The second problem is more fundamental and may get to the heart of the disagreement between myself and Shevlin. The problem is that Shevlin seems to think that behavioural evidence and cognitive evidence are separable. I do not think that they are. After all, cognitive architectures do not speak for themselves. They speak through behaviour. The human cognitive architecture, for example, is not that differentiated at a biological level, particularly at the cortical level. You would be hard pressed to work out the cognitive function of different brain regions just by staring at MRI scans and microscopic slices of neural tissue. You need behavioural evidence to tell you what the cognitive architecture does.  This is what has happened repeatedly in the history of neuro- and cognitive science. So, for example, we find that people with damage to particular regions of the brain exhibit some odd behaviours (lack of long term memory formation; irritability and impulsiveness; language deficits; and so on). We then use this behavioural evidence to build up a functional map of the cognitive architecture. If the map is detailed enough, someone might be able to infer certain psychological or mental states from patterns of activity in the cognitive architecture, but this is only because we first used behaviour to build up the functional map.

Sunday, December 23, 2018

Fresh urgency in mapping out ethics of brain organoid research

Julian Koplin and Julian Savulescu
The Conversation
Originally published November 20, 2018

Here is an excerpt:

But brain organoid research also raises serious ethical questions. The main concern is that brain organoids could one day attain consciousness – an issue that has just been brought to the fore by a new scientific breakthrough.

Researchers from the University of California, San Diego, recently reported the creation of brain organoids that spontaneously produce brain waves resembling those found in premature infants. Although this electrical activity does not necessarily mean these organoids are conscious, it does show that we need to think through the ethics sooner rather than later.

Regulatory gaps

Stem cell research is already subject to careful regulation. However, existing regulatory frameworks have not yet caught up with the unique set of ethical concerns associated with brain organoids.

Guidelines like the National Health and Medical Research Council’s National Statement on Ethical Conduct in Human Research protect the interests of those who donate human biological material to research (and also address a host of other issues). But they do not consider whether brain organoids themselves could acquire morally relevant interests.

This gap has not gone unnoticed. A growing number of commentators argue that brain organoid research should face restrictions beyond those that apply to stem cell research more generally. Unfortunately, little progress has been made on identifying what form these restrictions should take.


Sunday, March 24, 2013

The Grounds of Moral Status

Stanford Encyclopedia of Philosophy
First published on March 14, 2013

An entity has moral status if and only if it or its interests morally matter to some degree for the entity's own sake, such that it can be wronged. For instance, an animal may be said to have moral status if its suffering is at least somewhat morally bad, on account of this animal itself and regardless of the consequences for other beings, and acting unjustifiably against its interests is not only wrong, but wrongs the animal. Others owe it to the animal to avoid acting in this way. Some philosophers think of moral status as coming in degrees, reserving the notion of full moral status (FMS) for the highest degree of status.

Sometimes the term “moral standing” rather than “moral status” is used, but typically these terms have the same meaning. Some philosophers employ the language of “moral considerability” but this term is extremely ambiguous. Some use it as an alternate expression for “moral status” which is understood to come in degrees. In other cases the phrase is used to mean FMS. Act Utilitarians employ yet a third notion of moral considerability, which is a matter of having one's interests (e.g., the intensity, duration, etc. of one's pleasure or pain) factored into the calculus to determine which action minimizes the bad and maximizes the good. To avoid these ambiguities, this entry will use the terminology of “moral status” and “FMS.” 

After reviewing which entities have been thought to have moral status and what is involved in having FMS, as opposed to a lesser degree of moral status, this article will survey different views of the grounds of moral status as well as the arguments for attributing a particular degree of moral status on the basis of those grounds.
