Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, August 11, 2022

Can you really do more than what duty requires?

Roger Crisp
The New Statesman
Originally posted June 8, 2022

Here is an excerpt:

Since supererogation involves the paradox of accepting moral duties that do not require one to do what is morally best, why do we continue to find the idea so compelling?

One reason might be that we think that without supererogation the dictates of morality would be unacceptably demanding. If each of us has a genuine duty to benefit others as much as we can, then, given the vast number of individuals in serious need, most of the better-off would be required to make major sacrifices to live a virtuous life. Supererogation puts a limit on such requirements.

The idea that we can go beyond our duty in a praiseworthy way may be attractive, then, because we need to balance morality with self-interest. Here we ought to remember that each of us reasonably attaches a certain amount of importance to how our own lives go. So, each of us has reason to advance our own happiness independent of our duty to benefit others (which is why we describe some cases of helping others as a “sacrifice”). The need to strike a balance between our moral duties and our self-interest may explain why the notion of supererogation is so appealing.

But this doesn’t get us out of Sidgwick’s paradox: anyone who knows the morally best thing to do, but consciously decides not to do it, seems morally “lazy”.

Given the current state of the world, this means that morality is much more demanding than we typically think. Many of us should be doing a great deal more to alleviate the suffering of others, and doing this may cost us not only resources, but to some extent our own happiness or well-being.

In making donations to help strangers, we must ask when our reasons for keeping resources for ourselves are outweighed by reasons of beneficence. Under a more demanding view of morality, I should donate the money I could use to upgrade my TV to a charity that can save someone’s sight. Similarly, if the billionaire class could eradicate world poverty by donating 50 per cent of their wealth to development agencies, then they should do so immediately.

This may sound austere to our contemporary ears, but the Ancient Greeks and their philosophers thought morality could be rather demanding, and yet they never even considered the idea that duty was something you could go beyond. According to them, there are right things to do, and we should do them, making us virtuous and praiseworthy. And if we don’t, we are acting wrongly, we deserve blame, and we should feel guilty and ashamed.

It’s plausible to think that, once our health and wealth have reached certain thresholds, the things that really matter for our well-being – friendship, family, meaningful activities, and so on – are largely independent of our financial position. So, making much bigger sacrifices than we currently do may not be nearly as difficult or demanding as we tend to think.


Editor's note: For psychologists, supererogatory actions may include political advocacy for greater access to care, pro bono treatment for underserved populations, and volunteering on state and national association committees.

Friday, May 21, 2021

In search of the moral status of AI: why sentience is a strong argument

Gibert, M., & Martin, D.
AI & Society (2021).
https://doi.org/10.1007/s00146-021-01179-z

Abstract

Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what conditions should we grant moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the particular argument, which leads us to move to a different one. We leave the idea of indirect duties aside, since such duties do not imply considering an AI system for its own sake. The paper rejects the relational argument and the argument from intelligence. The argument from life may lead us to grant a moral status to an AI system, but only in a weak sense. Sentience, by contrast, is a strong argument for the moral status of an AI system, based, among other things, on the Aristotelian principle of equality: that like cases should be treated alike. The paper points out, however, that no AI system is sentient given the current level of technological development.

Saturday, February 6, 2016

Understanding Responses to Moral Dilemmas

Deontological Inclinations, Utilitarian Inclinations, and General Action Tendencies

Bertram Gawronski, Paul Conway, Joel B. Armstrong, Rebecca Friesdorf, and Mandy Hütter
In J. P. Forgas, L. Jussim, & P. A. M. Van Lange (Eds.), Social psychology of morality (2016). New York, NY: Psychology Press.

Introduction

For centuries, societies have wrestled with the question of how to balance the rights of the individual against the greater good (see Forgas, Jussim, & Van Lange, this volume): is it acceptable to ignore a person’s rights in order to increase the overall well-being of a larger number of people? The contentious nature of this issue is reflected in many contemporary examples, including debates about whether it is legitimate to cause harm in order to protect societies against threats (e.g., shooting down a hijacked passenger plane to prevent a terrorist attack) and whether it is acceptable to refuse life-saving support for some people in order to protect the well-being of many others (e.g., refusing to return American citizens who became infected with Ebola in Africa to the US for treatment). These issues have captured the attention of social scientists, politicians, philosophers, lawmakers, and citizens alike, partly because they involve a conflict between two moral principles.

The first principle, often associated with the moral philosophy of Immanuel Kant, emphasizes the irrevocable universality of rights and duties. According to the principle of deontology, the moral status of an action is derived from its consistency with context-independent norms (norm-based morality). From this perspective, violations of moral norms are unacceptable irrespective of the anticipated outcomes (e.g., shooting down a hijacked passenger plane is always immoral because it violates the moral norm not to kill others). The second principle, often associated with the moral philosophy of John Stuart Mill, emphasizes the greater good. According to the principle of utilitarianism, the moral status of an action depends on its outcomes, more specifically its consequences for overall well-being (outcome-based morality).