Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Applied Ethics. Show all posts

Sunday, November 5, 2023

Is Applied Ethics Morally Problematic?

Franz, D.J.
J Acad Ethics 20, 359–374 (2022).
https://doi.org/10.1007/s10805-021-09417-1

Abstract

This paper argues that applied ethics can itself be morally problematic. As illustrated by the case of Peter Singer’s criticism of social practice, morally loaded communication by applied ethicists can lead to protests, backlashes, and aggression. By reviewing the psychological literature on self-image, collective identity, and motivated reasoning, three categories of morally problematic consequences of ethical criticism by applied ethicists are identified: serious psychological discomfort, moral backfiring, and hostile conflict. The most worrisome is moral backfiring: psychological research suggests that ethical criticism of people’s central moral convictions can reinforce exactly those attitudes. Therefore, applied ethicists can unintentionally contribute to a consolidation of precisely those social circumstances that they condemn as unethical. Furthermore, I argue that the normative concerns raised in this paper are not dependent on the commitment to one specific paradigm in moral philosophy. Utilitarianism, Aristotelian virtue ethics, and Rawlsian contractarianism all provide sound reasons to take morally problematic consequences of ethical criticism seriously. Only the case of deontological ethics is less clear-cut. Finally, I point out that the issues raised in this paper provide an excellent opportunity for further interdisciplinary collaboration between applied ethics and the social sciences. I also propose strategies for communicating ethics effectively.


Here is my summary:

First, ethical criticism can cause serious psychological discomfort. People often have strong emotional attachments to their moral convictions, and being told that their beliefs are wrong can be very upsetting. In some cases, ethical criticism can even lead to anxiety, depression, and other mental health problems.

Second, ethical criticism can lead to moral backfiring. This is when people respond to ethical criticism by doubling down on their existing beliefs. Moral backfiring is thought to be caused by a number of factors, including motivated reasoning and the need to maintain a positive self-image.

Third, ethical criticism can lead to hostile conflict. When people feel threatened by ethical criticism, they may become defensive and aggressive. This can lead to heated arguments, social isolation, and even violence.

Franz argues that these negative consequences are not just hypothetical. He points to a number of real-world examples, such as the backlash against Peter Singer's arguments for vegetarianism.

The author concludes by arguing that applied ethicists should be aware of the ethical dimension of their own work. They should be mindful of the potential for their work to cause harm, and they should take steps to mitigate these risks. For example, applied ethicists should be careful to avoid making personal attacks on those who disagree with them. They should also be willing to engage in respectful dialogue with those who have different moral views.

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).
https://doi.org/10.1007/s13347-022-00591-7

Abstract

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constantly reinventing the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

Tuesday, April 23, 2019

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feel-good, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play, but because they are occupation- or discipline-specific, they cannot be relied on to do all, or even most of, the heavy lifting.


Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent, given stakeholders' goals for the system.
  • Security: applying cybersecurity paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly, given a formal specification.
  • Ethics: the effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Thursday, April 14, 2016

The Ethics of Doing Ethics

Sven Ove Hansson
Sci Eng Ethics
https://doi.org/10.1007/s11948-016-9772-3

Abstract

Ethicists have investigated ethical problems in other disciplines, but there has not been much discussion of the ethics of their own activities. Research in ethics has many ethical problems in common with other areas of research, and it also has problems of its own. The researcher’s integrity is more precarious than in most other disciplines, and therefore even stronger procedural checks are needed to protect it. The promotion of some standpoints in ethical issues may be socially harmful, and even our decisions as to which issues we label as “ethical” may have unintended and potentially harmful social consequences. It can be argued that ethicists have an obligation to make positive contributions to society, but the practical implications of such an obligation are not easily identified. This article provides an overview of ethical issues that arise in research into ethics and in the application of such research. It ends with a list of ten practical proposals for how these issues should be dealt with.


Wednesday, September 18, 2013

The moral behavior of ethics professors: Relationships among self-reported behavior, expressed normative attitude, and directly observed behavior

Eric Schwitzgebel & Joshua Rust
Philosophical Psychology
https://doi.org/10.1080/09515089.2012.727135

Abstract

Do philosophy professors specializing in ethics behave, on average, any morally better than do other professors? If not, do they at least behave more consistently with their expressed values? These questions have never been systematically studied. We examine the self-reported moral attitudes and moral behavior of 198 ethics professors, 208 non-ethicist philosophers, and 167 professors in departments other than philosophy on eight moral issues: academic society membership, voting, staying in touch with one's mother, vegetarianism, organ and blood donation, responsiveness to student emails, charitable giving, and honesty in responding to survey questionnaires. On some issues, we also had direct behavioral measures that we could compare with the self-reports. Ethicists expressed somewhat more stringent normative attitudes on some issues, such as vegetarianism and charitable donation. However, on no issue did ethicists show unequivocally better behavior than the two comparison groups. Our findings on attitude-behavior consistency were mixed: ethicists showed the strongest relationship between behavior and expressed moral attitude regarding voting but the weakest regarding charitable donation. We discuss implications for several models of the relationship between philosophical reflection and real-world moral behavior.

The article is here, hiding behind a paywall.