Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, April 4, 2019

Confucian Ethics as Role-Based Ethics

A. T. Nuyen
International Philosophical Quarterly
Volume 47, Issue 3, September 2007, 315-328.

Abstract

For many commentators, Confucian ethics is a kind of virtue ethics. However, there is enough textual evidence to suggest that it can be interpreted as an ethics based on rules, consequentialist as well as deontological. Against these views, I argue that Confucian ethics is based on the roles that make an agent the person he or she is. Further, I argue that in Confucianism the question of what it is that a person ought to do cannot be separated from the question of what it is to be a person, and that the latter is answered in terms of the roles that arise from the network of social relationships in which a person stands. This does not mean that Confucian ethics is unlike anything found in Western philosophy. Indeed, I show that many Western thinkers have advanced a view of ethics similar to the Confucian ethics as I interpret it.

The info is here.

I’m a Journalist. Apparently, I’m Also One of America’s “Top Doctors.”

Marshall Allen
Propublica.org
Originally posted Feb. 28, 2019

Here is an excerpt:

And now, for reasons still unclear, Top Doctor Awards had chosen me — and I was almost perfectly the wrong person to pick. I’ve spent the last 13 years reporting on health care, a good chunk of it examining how our health care system measures the quality of doctors. Medicine is complex, and there’s no simple way of saying some doctors are better than others. Truly assessing the performance of doctors, from their diagnostic or surgical outcomes to the satisfaction of their patients, is challenging work. And yet, for-profit companies churn out lists of “Super” or “Top” or “Best” physicians all the time, displaying them in magazine ads, online listings or via shiny plaques or promotional videos the companies produce for an added fee.

On my call with Anne from Top Doctors, the conversation took a surreal turn.

“It says you work for a company called ProPublica,” she said, blithely. At least she had that right.

I responded that I did and that I was actually a journalist, not a doctor. Is that going to be a problem? I asked. Or can you still give me the “Top Doctor” award?

There was a pause. Clearly, I had thrown a baffling curve into her script. She quickly regrouped. “Yes,” she decided, I could have the award.

Anne’s bonus, I thought, must be volume based.

Then we got down to business. The honor came with a customized plaque, with my choice of cherry wood with gold trim or black with chrome trim. I mulled over which vibe better fit my unique brand of medicine: the more traditional cherry or the more modern black?

The info is here.

Wednesday, April 3, 2019

Artificial Morality

Robert Koehler
www.citywatchla.com
Originally posted March 21, 2019

Here is an excerpt:

What I see here is moral awakening scrambling for sociopolitical traction: Employees are standing for something larger than sheer personal interests, in the process pushing the Big Tech brass to think beyond their need for an endless flow of capital, consequences be damned.

This is happening across the country. A movement is percolating: Tech won’t build it!

“Across the technology industry,” the New York Times reported in October, “rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they built. At Google, Amazon, Microsoft and Salesforce, as well as at tech start-ups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.

“That’s a change from the past, when Silicon Valley workers typically developed products with little questioning about the social costs.”

What if moral thinking — not in books and philosophical tracts, but in the real world, both corporate and political — were as large and complex as technical thinking? It could no longer hide behind the cliché of the just war (and surely the next one we’re preparing for will be just), but would have to evaluate war itself — all wars, including the ones of the past 70 years or so, in the fullness of their costs and consequences — as well as look ahead to the kind of future we could create, depending on what decisions we make today.

Complex moral thinking doesn’t ignore the need to survive, financially and otherwise, in the present moment, but it stays calm in the face of that need and sees survival as a collective, not a competitive, enterprise.

The info is here.

Feeling Good: Integrating the Psychology and Epistemology of Moral Intuition and Emotion

Hossein Dabbagh
Journal of Cognition and Neuroethics 5 (3): 1–30.

Abstract

Is the epistemology of moral intuitions compatible with admitting a role for emotion? I argue in this paper that moral intuitions and emotions can be partners without creating an epistemic threat. I start off by offering some empirical findings to weaken Singer’s (and Greene’s and Haidt’s) debunking argument against moral intuition, which treats emotions as a distorting factor. In the second part of the paper, I argue that the standard contrast between intuition and emotion is a mistake. Moral intuitions and emotions are not contestants if we construe moral intuition as non-doxastic intellectual seeming and emotion as a non-doxastic perceptual-like state. This will show that emotions support, rather than distort, the epistemic standing of moral intuitions.

Here is an excerpt:

However, the cognitive sciences, as I argued above, show us that seeing all emotions in this excessively pessimistic way is not plausible. To think about emotional experience as always being a source of epistemic distortion would be wrong. On the contrary, there are some reasons to believe that emotional experiences can sometimes make a positive contribution to our activities in practical rationality. So, there is a possibility that some emotions are not distorting factors. If this is right, we are no longer justified in saying that emotions always distort our epistemic activities. Instead, emotions (construed as quasi-perceptual experiences) might have some cognitive elements assessable for rationality.

The paper is here.

Tuesday, April 2, 2019

Former Patient Coordinator Pleads Guilty to Wrongfully Disclosing Health Information to Cause Harm

Department of Justice
U.S. Attorney’s Office
Western District of Pennsylvania
Originally posted March 6, 2019

A resident of Butler, Pennsylvania, pleaded guilty in federal court to a charge of wrongfully disclosing the health information of another individual, United States Attorney Scott W. Brady announced today.

Linda Sue Kalina, 61, pleaded guilty to one count before United States District Judge Arthur J. Schwab.

In connection with the guilty plea, the court was advised that Linda Sue Kalina worked, from March 7, 2016 through June 23, 2017, as a Patient Information Coordinator with UPMC and its affiliate, Tri Rivers Musculoskeletal Centers (TRMC) in Mars, Pennsylvania, and that during her employment, contrary to the requirements of the Health Insurance Portability and Accountability Act (HIPAA), she improperly accessed the individual health information of 111 UPMC patients who had never been provided services at TRMC. Specifically, on August 11, 2017, Kalina unlawfully disclosed personal gynecological health information related to two such patients, with the intent to cause those individuals embarrassment and mental distress.

Judge Schwab scheduled sentencing for June 25, 2019, at 10 a.m. The law provides for a total sentence of 10 years in prison, a fine of $250,000, or both. Under the Federal Sentencing Guidelines, the actual sentence imposed is based upon the seriousness of the offense and the prior criminal history, if any, of the defendant. Kalina remains on bond pending the sentencing hearing.

Assistant United States Attorney Carolyn J. Bloch is prosecuting this case on behalf of the government.

The Federal Bureau of Investigation conducted the investigation that led to the prosecution of Kalina.

Will You Forgive Your Supervisor’s Wrongdoings? The Moral Licensing Effect of Ethical Leader Behaviors

Rong Wang and Darius K.-S. Chan
Front. Psychol., 05 March 2019
https://doi.org/10.3389/fpsyg.2019.00484

Abstract

Moral licensing theory suggests that observers may liberate actors to behave in morally questionable ways due to the actors’ history of moral behaviors. Drawing on this view, a scenario experiment with a 2 (high vs. low ethical) × 2 (internal vs. external motivation) between-subject design (N = 455) was conducted in the current study. We examined whether prior ethical leader behaviors cause subordinates to license subsequent abusive supervision, as well as the moderating role of behavior motivation on such effects. The results showed that when supervisors demonstrated prior ethical behaviors, subordinates, as victims, liberated them to act in abusive ways. Specifically, subordinates showed high levels of tolerance and low levels of condemnation toward abusive supervision and seldom experienced emotional responses to supervisors’ abusive behaviors. Moreover, subordinates tended to attribute abusive supervision, viewed as a kind of mistreatment without an immediate intent to cause harm, to characteristics of the victims and of the organization rather than of the supervisors per se. When supervisors behaved morally out of internal rather than external motivations, the aforementioned licensing effects were stronger.
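
To make the study design concrete, here is a minimal, hypothetical sketch of how a 2 (ethical: high vs. low) × 2 (motivation: internal vs. external) between-subjects design can be simulated and analyzed with a two-way ANOVA in Python. The variable names, effect sizes, and simulated data are illustrative assumptions only, not values or methods taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 114  # roughly 455 / 4; the paper reports N = 455 overall

# Assign participants to the four cells of the 2 x 2 between-subjects design.
ethical = np.repeat(["high", "low"], 2 * n_per_cell)
motivation = np.tile(np.repeat(["internal", "external"], n_per_cell), 2)

# Hypothetical outcome: tolerance of abusive supervision.
# The effect sizes below are invented purely for illustration.
tolerance = (
    3.0
    + 0.8 * (ethical == "high")                                   # main effect of prior ethical behavior
    + 0.4 * ((ethical == "high") & (motivation == "internal"))    # licensing stronger for internal motivation
    + rng.normal(0, 1, ethical.size)                              # individual noise
)

df = pd.DataFrame({"ethical": ethical, "motivation": motivation, "tolerance": tolerance})

# Two-way between-subjects ANOVA: both main effects plus the ethical x motivation interaction.
model = smf.ols("tolerance ~ C(ethical) * C(motivation)", data=df).fit()
print(anova_lm(model, typ=2))
```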

Here is a portion of the Discussion

The main findings of this research have implications for organizational practice. Subordinates tend to give leaders license for morally questionable behaviors after observing those leaders’ prior ethical behaviors, which may foster tolerance of, and even encourage, destructive leadership styles. First, organizations can take steps, including training and interventions, to strengthen their ethical climate. An organization’s ethical climate not only helps manage ethical behavior within the organization, but also shapes organizational members’ zero-tolerance attitude toward leaders’ mistreatment and questionable behaviors (Bartels et al., 1998).

Monday, April 1, 2019

Psychiatrist suspended for ‘inappropriate relationship.’ He got a $196K state job.

Steve Contorno & Lawrence Mower
www.miamiherald.com
Originally posted February 28, 2019

Less than a year ago, Domingo Cerra Fernandez was suspended from practicing medicine in the state of Florida.

The Ocala psychiatrist allegedly committed one of the cardinal sins of his discipline: He propositioned a patient to have a sexual and romantic relationship with him. He then continued to treat her.

But just months after his Florida suspension ended, Cerra Fernandez has a new job. He’s a senior physician at the North Florida Evaluation and Treatment Center, a maximum-security state-run treatment facility for mentally disabled adult male patients.

How did a recently suspended psychiatrist find himself working with some of Florida’s most vulnerable and dangerous residents, with a $196,000 annual salary?

The Department of Children and Families, which runs the facility, knew about his case before hiring him to a job that had been vacant for more than a year. DaMonica Smith, a department spokeswoman, told the Herald/Times that Cerra Fernandez was up front about his discipline.

The info is here.

Neuroscience Readies for a Showdown Over Consciousness Ideas

Philip Ball
Quanta Magazine
Originally published March 6, 2019

Here is an excerpt:

Philosophers have debated the nature of consciousness and whether it can inhere in things other than humans for thousands of years, but in the modern era, pressing practical and moral implications make the need for answers more urgent. As artificial intelligence (AI) grows increasingly sophisticated, it might become impossible to tell whether one is dealing with a machine or a human merely by interacting with it — the classic Turing test. But would that mean AI deserves moral consideration?

Understanding consciousness also impinges on animal rights and welfare, and on a wide range of medical and legal questions about mental impairments. A group of more than 50 leading neuroscientists, psychologists, cognitive scientists and others recently called for greater recognition of the importance of research on this difficult subject. “Theories of consciousness need to be tested rigorously and revised repeatedly amid the long process of accumulation of empirical evidence,” the authors said, adding that “myths and speculative conjectures also need to be identified as such.”

You can hardly do experiments on consciousness without having first defined it. But that’s already difficult because we use the word in several ways. Humans are conscious beings, but we can lose consciousness, for example under anesthesia. We can say we are conscious of something — a strange noise coming out of our laptop, say. But in general, the quality of consciousness refers to a capacity to experience one’s existence rather than just recording it or responding to stimuli like an automaton. Philosophers of mind often refer to this as the principle that one can meaningfully speak about what it is to be “like” a conscious being — even if we can never actually have that experience beyond ourselves.

The info is here.

Sunday, March 31, 2019

Is Ethical A.I. Even Possible?

Cade Metz
The New York Times
Originally posted March 1, 2019

Here is an excerpt:

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

The info is here.