Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, September 30, 2023

Toward a Social Bioethics Through Interpretivism: A Framework for Healthcare Ethics.

Dougherty, R., & Fins, J. (2023).
Cambridge Quarterly of Healthcare Ethics, 1-11.

Abstract

Recent global events demonstrate that analytical frameworks to aid professionals in healthcare ethics must consider the pervasive role of social structures in the emergence of bioethical issues. To address this, the authors propose a new sociologically informed approach to healthcare ethics that they term “social bioethics.” Their approach is animated by the interpretive social sciences to highlight how social structures operate vis-à-vis the everyday practices and moral reasoning of individuals, a phenomenon known as social discourse. As an exemplar, the authors use social bioethics to reframe common ethical issues in psychiatric services and discuss potential implications. Lastly, the authors discuss how social bioethics illuminates the ways healthcare ethics consultants in both policy and clinical decision-making participate in and shape broader social, political, and economic systems, which then cyclically informs the design and delivery of healthcare.

My summary: 

The authors argue that traditional bioethical frameworks, which focus on individual rights and responsibilities, are not sufficient to address the complex ethical issues that arise in healthcare. They argue that social bioethics can help us to better understand how social structures, such as race, class, gender, and sexual orientation, shape the experiences of patients and healthcare providers, and how these experiences can influence ethical decision-making.

The authors use the example of psychiatric services to illustrate how social bioethics can be used to reframe common ethical issues. They argue that the way we think about mental illness is shaped by social and cultural factors, such as our understanding of what it means to be "normal" and "healthy." These factors can influence how we diagnose, treat, and care for people with mental illness.

The authors also argue that social bioethics can help us to understand the role of healthcare ethics consultants in shaping broader social, political, and economic systems. They argue that these consultants participate in a process of "social discourse," in which they help to define the terms of the debate about ethical issues in healthcare. This discourse can then have a cyclical effect on the design and delivery of healthcare.

Here are some of the key concepts of social bioethics:
  • Social structures: The systems of power and inequality that shape our society.
  • Social discourse: The process of communication and negotiation through which we define and understand social issues.
  • Healthcare ethics consultants: Professionals who help to resolve ethical dilemmas in healthcare.
  • Social justice: The fair and equitable distribution of resources and opportunities.

Wednesday, September 27, 2023

Property ownership and the legal personhood of artificial intelligence

Brown, R. D. (2021).
Information & Communications Technology Law, 
30(2), 208–234.


Abstract

This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion on the nexus of AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applies this analysis to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

From the Conclusion

This article proposes an analysis of legal personhood that focuses on rights and duties. In doing so, the article looks to property ownership, which raises both requirements. Property ownership is certainly only one type of legal right, which also includes the right to sue or be sued, or legal standing, and the right to contract. Property ownership, however, is a key feature of AI since it relies mainly on arguably the most valuable property today: data.

It is unlikely that governments and legislators will suddenly recognise in one event AI’s ownership of property and AI’s legal personhood. Rather, acceptance of AI’s legal personhood, as with the acceptance of corporate personhood, will develop as a process and in stages, in parallel to the development of legal personhood. At first, AI will be deemed a tool and will not have the right to own property. This is the most common conception of AI today. Second, AI will be deemed an agent, and upon updating existing agency law to include AI as a person for purposes of agency, AI will also be allowed to own property as an agent in the same agency ownership arrangement that Rothenberg proposes. While AI already acts as a de facto agent in many circumstances today through electronic contracts, most governments and legislators have not recognised AI as an agent. The laws of many countries, like Qatar, still define an agent as a person, which upon strict interpretation would not include AI or an electronic agent. This is an existing gap in the laws that will likely create legal challenges in the near future.

However, as AI develops its ability to communicate and asserts more autonomy, it will come to own all sorts of digital assets. At first, AI will likely possess and control property in conjunction with human action and decisions. Examples would be the use of AI in money laundering, or the hiding of digital assets by placing them within the control and possession of an AI. In some instances, AI will have possession and control of property unknown or unforeseen by humans.

If AI is seen as separate from data – as the software that processes and interprets data for various purposes, self-learns from the data, makes autonomous decisions, and predicts human behaviour and decisions – then there could come a time when society views AI not as the object (the data) but as that which manipulates, controls, and possesses data and digital property.

Brief Summary:

The question of granting property ownership to AI is a complex one that raises a number of legal and ethical challenges. The author suggests that further research is needed to explore these challenges and to develop a framework for granting property ownership to AI in a way that is both legally sound and ethically justifiable.

Monday, September 18, 2023

Property ownership and the legal personhood of artificial intelligence

Rafael Dean Brown (2021) 
Information & Communications Technology Law, 
30:2, 208-234. 
DOI: 10.1080/13600834.2020.1861714

Abstract

This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion on the nexus of AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applies this analysis to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

(cut)

Persona ficta and juristic person

The concepts of persona ficta and juristic person, as distinct from a natural person, trace their origins to early attempts at giving legal rights to a group of men acting in concert. While the concept of persona ficta has its roots in Roman law, ecclesiastical lawyers expanded upon it during the Middle Ages. Savigny is now credited with bringing the concept into modern legal thought. A persona ficta, under Roman law principles, could not exist except under some ‘creative act’ of a legislative body – the State. According to Deiser, however, the concept of a persona ficta during the Middle Ages was insufficient to give it the full extent of rights associated with the modern concept of legal personhood – particularly property ownership and the recovery of property – without invoking the right of an individual member. It also could not receive state-granted rights, could not occupy a definite position within a community distinct from its separate members, and could not sue or be sued. In other words, persona ficta has historically required the will of the individual human member for the conferral of rights.

(cut)

In other words, weak AI, whether supervised or unsupervised, would ultimately have to rely on some sort of intervention from its human programmer to exercise property rights. If anything, weak AI is more akin to an infant requiring guardianship than to a river or an idol, mainly because weak AI functions in reliance on the human programmer’s code and data. A weak AI in possession and control of property could arguably be conferred the right to own property subject to a human agent acting on its behalf as a guardian. In this way, the law could grant a weak AI legal personhood based on its exercise of property rights in the same way that the law granted legal personhood to a corporation, a river, or an idol. The law would attribute the will of the human programmer to the weak AI.

The question of whether a strong AI, if it were to become a reality, should also be granted legal personhood based on its exercise of the right to own property is altogether a different inquiry. Strong AI could theoretically take actual or constructive possession of property, and therefore exercise property rights independently the way a human would, and even in more advanced ways. However, a strong AI’s independence and autonomy imply that it could have the ability to assert and exercise property rights beyond the control of laws and human beings. This would be problematic for our current notions of property ownership and social order. In this way, the fear of a strong AI with unregulated possession of property is real, and bolsters the argument in favour of human-centred and explainable AI that requires human intervention.


My summary:

The author discusses the prevailing misconceptions about the requirements of rights or duties in legal personhood. He argues that the independent exercise of property rights is not a necessary condition for legal personhood: rivers and idols, for example, have been recognized as legal persons whose property rights are exercised through human guardians acting on their behalf.

The author then considers the potential for conferring rights or imposing obligations on weak and strong AI. He argues that weak AI, which is capable of only limited reasoning and decision-making, may be granted property ownership and legal personhood. This is because the law could attribute the will of the human programmer to weak AI and allow a human guardian to exercise property rights on its behalf, much as it has done for corporations, rivers, and idols.

Strong AI, on the other hand, would be capable of independent thought and action. The author argues that its autonomy implies it could assert and exercise property rights beyond the control of laws and human beings, which would be problematic for our current notions of property ownership and social order. Therefore, he concludes that the law may not grant property ownership and legal personhood to strong AI.

The author's argument rests on the assumption that property ownership and legal personhood are closely linked. This assumption is contested: some legal scholars treat the capacity to own property as itself sufficient for legal personhood, while others hold that property can be held on behalf of entities that are not legal persons at all.

The question of whether AI can own property is a complex one that is likely to be debated for many years to come. The article "Property ownership and the legal personhood of artificial intelligence" provides a thoughtful and nuanced discussion of this issue.

Tuesday, April 16, 2019

Is there such a thing as moral progress?

John Danaher
Philosophical Disquisitions
Originally posted March 18, 2019

We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. We express dismay at the ‘regressive’ moral views of racists and bigots. Some people (I’m looking at you, Steven Pinker) have written long books that defend the idea that, although there have been setbacks, there has been a general upward trend in our moral attitudes over the course of human history. Martin Luther King once said that the arc of the moral universe is long but bends towards justice.

But does moral progress really exist? And how would we know if it did? Philosophers have puzzled over this question for some time. The problem is this. There is no doubt that there has been moral change over time, and there is no doubt that we often think of our moral views as being more advanced than those of our ancestors, but it is hard to see exactly what justifies this belief. It seems like you would need some absolute moral standard or goal against which you can measure moral change to justify that belief. Do we have such a thing?

In this post, I want to offer some of my own preliminary and underdeveloped thoughts on the idea of moral progress. I do so by first clarifying the concept of moral progress, and then considering whether and when we can say that it exists. I will suggest that moral progress is real, and that we are at least sometimes justified in saying that it has taken place. Nevertheless, there are some serious puzzles and conceptual difficulties with identifying some forms of moral progress.

The info is here.

Tuesday, December 26, 2017

When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law

Pagallo, U.
Minds & Machines (2017) 27: 625.
https://doi.org/10.1007/s11023-017-9418-5

Abstract

No single moral theory can instruct us as to whether and to what extent we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the criminal law field. This question on the primary rules of the law appears crucial for today’s debate on roboethics and still goes beyond the expertise of robo-ethicists. On the other hand, attention should be drawn to the secondary rules of the law: the unpredictability of robotic behaviour and the lack of data on the probability of events, their consequences and costs, make it hard to determine the levels of risk and hence the amount of insurance premiums and other mechanisms on which new forms of accountability for the behaviour of robots may hinge. By following Japanese thinking, the aim is to show why legally de-regulated, or special, zones for robotics, i.e. the secondary rules of the system, pave the way to understanding what kind of primary rules we may want for our robots.

The article is here.

Saturday, March 5, 2016

The Definition of Morality

Gert, Bernard and Gert, Joshua
The Stanford Encyclopedia of Philosophy 
(Spring 2016 Edition), Edward N. Zalta (ed.), forthcoming

The topic of this entry is not—at least directly—moral theory; rather, it is the definition of morality. Moral theories are large and complex things; definitions are not. The question of the definition of morality is the question of identifying the target of moral theorizing. Identifying this target enables us to see different moral theories as attempting to capture the very same thing. In this way, the distinction between a definition of morality and a moral theory parallels the distinction John Rawls (1971: 9) drew between the general concept of justice and various detailed conceptions of it. Rawls’ terminology, however, suggests a psychological distinction, and also suggests that many people have conceptions of justice. But the definition/theory distinction is not psychological, and only moral theorists typically have moral theories.

There does not seem to be much reason to think that a single definition of morality will be applicable to all moral discussions. One reason for this is that “morality” seems to be used in two distinct broad senses: a descriptive sense and a normative sense. More particularly, the term “morality” can be used either

  1. descriptively to refer to certain codes of conduct put forward by a society or a group (such as a religion), or accepted by an individual for her own behavior, or

  2. normatively to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.

Which of these two senses of “morality” a theorist is using plays a crucial, although sometimes unacknowledged, role in the development of an ethical theory. If one uses “morality” in its descriptive sense, and therefore uses it to refer to codes of conduct actually put forward by distinct groups or societies, one will almost certainly deny that there is a universal morality that applies to all human beings. The descriptive use of “morality” is the one used by anthropologists when they report on the morality of the societies that they study. Recently, some comparative and evolutionary psychologists (Haidt 2006; Hauser 2006; De Waal 1996) have taken morality, or a close anticipation of it, to be present among groups of non-human animals: primarily, but not exclusively, other primates.

The entire entry is here.

Thursday, October 22, 2015

Nudging and Informed Consent

Shlomo Cohen
The American Journal of Bioethics
Volume 13, Issue 6, 2013

Abstract

Libertarian paternalism's notion of “nudging” refers to steering individual decision making so as to make choosers better off without breaching their free choice. If successful, this may offer an ideal synthesis between the duty to respect patient autonomy and that of beneficence, which at times favors paternalistic influence. A growing body of literature attempts to assess the merits of nudging in health care. However, this literature deals almost exclusively with health policy, while the question of the potential benefit of nudging for the practice of informed consent has escaped systematic analysis. This article focuses on this question. While it concedes that nudging could amount to improper exploitation of cognitive weaknesses, it defends the practice of nudging in a wide range of other conditions. The conclusion is that, when ethically legitimate, nudging offers an important new paradigm for informed consent, with a special potential to overcome the classical dilemma between paternalistic beneficence and respect for autonomy.

The entire article is here.

Thursday, December 4, 2014

Why I Am Not a Utilitarian

By Julian Savulescu
Practical Ethics Blog
Originally posted November 15, 2014

Utilitarianism is a widely despised, denigrated and misunderstood moral theory.

Kant himself described it as a morality fit only for English shopkeepers. (Kant had much loftier aspirations of entering his own “noumenal” world.)

The adjective “utilitarian” now has negative connotations like “Machiavellian”. It is associated with “the end justifies the means” or using people as a mere means or failing to respect human dignity, etc.

For example, consider the following negative uses of “utilitarian.”

“Don’t be so utilitarian.”

“That is a really utilitarian way to think about it.”

To say someone is behaving in a utilitarian manner is to say something derogatory about their behaviour.

The entire article is here.