Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, October 2, 2021

We’ve Never Protected the Vulnerable

Aaron Carroll
The Atlantic
Originally posted September 5, 2021

Here is an excerpt:

The Americans With Disabilities Act provides for some accommodations for people with disabilities or diseases in certain situations, but those are extremely limited. They also apply only to the afflicted. My friend’s wife, a teacher, couldn’t tell her school that she needed special treatment because someone was vulnerable in her life. The school implemented no precautions to reduce her chance of being exposed to illness and getting sick, precautions that would have helped keep her husband safe at home. Neither could his kids demand changes at their schools. Asking schools to alter their behavior to protect relatives of students may seem like a big ask, but I couldn’t even persuade all of our close friends to get vaccinated against the flu to protect him.

COVID-19 has exposed these gaps in our public solidarity, not caused them. The way we handle influenza, the infectious disease that usually causes the highest number of deaths each year, is the best example. Even though the young and the old are at real risk from flu, along with the immunocompromised, we’ve almost never engaged in any special protections for these groups. I’ve begged people for years to get immunized to protect others, and most don’t listen. Other countries mask more during respiratory-virus seasons; almost no one even thinks of masking here. Few distance from others, even though that’s a more palatable option for most Americans. To the contrary, many people consider it a mark of pride to “tough it out” and come to work while sick, potentially exposing others.

Our current situation with COVID-19 is especially difficult because so many Americans believe they’ve already given more than enough. Any further adjustments to their lives, even if they seem small, feel like too much to bear. It’s natural that Americans want to get back to normal, and I’m not arguing that we should lock down until no risk remains. I’m asking that we think about others more in specific settings. We don’t all have to wear a mask all the time, but we could get used to always carrying one. That way, if we are around people who might live with others at high risk, we could mask around them and stand a little farther away. We could cancel our evening plans or miss a concert if we’re sick. Is it really that hard to get a flu shot every year?

Thursday, January 2, 2020

The Tricky Ethics of Google's Project Nightingale Effort

Cason Schmit
nextgov.com
Originally posted December 3, 2019

The nation’s second-largest health system, Ascension, has agreed to allow the software behemoth Google access to tens of millions of patient records. The partnership, called Project Nightingale, aims to improve how information is used for patient care. Specifically, Ascension and Google are trying to build tools, including artificial intelligence and machine learning, “to make health records more useful, more accessible and more searchable” for doctors.

Ascension did not announce the partnership: The Wall Street Journal first reported it.

Patients and doctors have raised privacy concerns about the plan, chiefly the lack of notice to doctors and of consent from patients.

As a public health lawyer, I study the legal and ethical basis for using data to promote public health. Information can be used to identify health threats, understand how diseases spread and decide how to spend resources. But it’s more complicated than that.

The law deals with what can be done with data; this piece focuses on ethics, which asks what should be done.

Beyond Hippocrates

Big-data projects like this one should always be ethically scrutinized. However, data ethics debates are often narrowly focused on consent issues.

In fact, ethical determinations require balancing different, and sometimes competing, ethical principles. Sometimes it might be ethical to collect and use highly sensitive information without getting an individual’s consent.

The info is here.

Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that the three levels of ethical regulation interrelate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.

The paper is here.

Friday, December 7, 2018

Neuroexistentialism: A New Search for Meaning

Owen Flanagan and Gregg D. Caruso
The Philosopher's Magazine
Originally published November 6, 2018

Existentialisms are responses to recognisable diminishments in the self-image of persons caused by social or political rearrangements or ruptures, and they typically involve two steps: (a) admission of the anxiety and an analysis of its causes, and (b) some sort of attempt to regain a positive, less anguished, more hopeful image of persons. With regard to the first step, existentialisms typically involve a philosophical expression of the anxiety that there are no deep, satisfying answers that make sense of the human predicament and explain what makes human life meaningful, and thus that there are no secure foundations for meaning, morals, and purpose. There are three kinds of existentialisms that respond to three different kinds of grounding projects – grounding in God’s nature, in a shared vision of the collective good, or in science. The first-wave existentialism of Kierkegaard, Dostoevsky, and Nietzsche expressed anxiety about the idea that meaning and morals are made secure because of God’s omniscience and good will. The second-wave existentialism of Sartre, Camus, and de Beauvoir was a post-Holocaust response to the idea that some uplifting secular vision of the common good might serve as a foundation. Today, there is a third-wave existentialism, neuroexistentialism, which expresses the anxiety that, even as science yields the truth about human nature, it also disenchants.

Unlike the previous two waves of existentialism, neuroexistentialism is not caused by a problem with ecclesiastical authority, nor by the shock of coming face to face with the moral horror of nation state actors and their citizens. Rather, neuroexistentialism is caused by the rise of the scientific authority of the human sciences and a resultant clash between the scientific and humanistic images of persons. Neuroexistentialism is a twenty-first-century anxiety over the way contemporary neuroscience helps secure in a particularly vivid way the message of Darwin from 150 years ago: that humans are animals – not half animal, not some percentage animal, not just above the animals, but 100 percent animal. Every day and in every way, neuroscience removes the last vestiges of an immaterial soul or self. It has no need for such posits. It also suggests that the mind is the brain and all mental processes just are (or are realised in) neural processes, that introspection is a poor instrument for revealing how the mind works, that there is no ghost in the machine or Cartesian theatre where consciousness comes together, that death is the end since when the brain ceases to function so too does consciousness, and that our sense of self may in part be an illusion.

The info is here.