Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, December 16, 2019

Courts Strike Down Trump’s ‘Refusal of Care’ Conscience Rule

Alicia Gallegos
mdedge.com
Originally posted 7 Nov 19

A federal court has struck down a Trump administration rule that would have allowed clinicians to refuse to provide medical care to patients for religious or moral reasons.

In a Nov. 6 decision, the U.S. District Court for the Southern District of New York vacated President Trump’s rule in its entirety, concluding that the rule had no justification and that its provisions were arbitrary and capricious. In his 147-page opinion, District Judge Paul Engelmayer wrote that the U.S. Department of Health & Human Services did not have the authority to enact such an expansive rule and that the measure conflicts with the Administrative Procedure Act, Title VII of the Civil Rights Act, and the Emergency Medical Treatment & Labor Act, among other laws.

“Had the court found only narrow parts of the rule infirm, a remedy tailoring the vacatur to only the problematic provision might well have been viable,” Judge Engelmayer wrote. “The [Administrative Procedure Act] violations that the court has found, however, are numerous, fundamental, and far reaching ... In these circumstances, a decision to leave standing isolated shards of the rule that have not been found specifically infirm would ignore the big picture: that the rulemaking exercise here was sufficiently shot through with glaring legal defects as to not justify a search for survivors [and] leaving stray nonsubstantive provisions intact would not serve a useful purpose.”

At press time, the Trump administration had not indicated whether it would appeal.

The info is here.

Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation

Ishowo-Oloko, F., Bonnefon, J., Soroye, Z. et al.
Nat Mach Intell 1, 517–521 (2019)
doi:10.1038/s42256-019-0113-5

Abstract

Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
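
Editor's Note: as a rough intuition pump for the experimental setup above, here is a minimal Python sketch of a repeated prisoner's dilemma played against a bot. The payoff matrix is the textbook one, but the tit-for-tat bot and the "prior bias" cooperation probabilities are our own illustrative assumptions, not the strategies or parameters used in the paper.

```python
import random

# Standard prisoner's dilemma payoffs for (my_move, partner_move);
# "C" = cooperate, "D" = defect (the textbook T=5, R=3, P=1, S=0 values).
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def average_payoff(rounds=50, told_partner_is_bot=False, seed=0):
    """Simulate one participant playing a repeated PD against a bot."""
    rng = random.Random(seed)
    bot_move = "C"  # assumed bot: opens cooperatively, then plays tit-for-tat
    total = 0
    for _ in range(rounds):
        # Assumed prior bias: participants told they face a bot start out
        # less willing to cooperate, echoing the bias the abstract reports.
        base = 0.5 if told_partner_is_bot else 0.8
        # Reciprocity: cooperation is likelier after the partner cooperated.
        p_coop = base + (0.15 if bot_move == "C" else -0.15)
        human_move = "C" if rng.random() < p_coop else "D"
        total += PAYOFFS[(human_move, bot_move)]
        bot_move = human_move  # tit-for-tat: copy the human's last move
    return total / rounds

for told in (False, True):
    label = "bot disclosed" if told else "bot undisclosed"
    print(f"{label}: mean payoff {average_payoff(told_partner_is_bot=told):.2f}")
```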

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and D. M. Søndergaard (Eds.), Designing Robots.

Abstract
This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. In particular, in the first movement we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened—i.e., to do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.
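
Editor's Note: a small illustrative sketch (ours, not the chapter's) of the first disruption the authors describe. Shipping an autonomous vehicle means fixing some resolution of each dilemma in code ahead of time, as if it were settled; every name and weight below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupants_harmed: int
    pedestrians_harmed: int

def choose_trajectory(options):
    # This one line quietly encodes a contested moral position: a crude
    # "minimise total harm" rule that weights all lives equally.
    return min(options, key=lambda o: o.occupants_harmed + o.pedestrians_harmed)

print(choose_trajectory([
    Outcome("swerve into barrier", occupants_harmed=1, pedestrians_harmed=0),
    Outcome("brake in lane", occupants_harmed=0, pedestrians_harmed=2),
]))
```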

The book chapter is here.

Saturday, December 14, 2019

The Dark Psychology of Social Networks

Jonathan Haidt and Tobias Rose-Stockwell
The Atlantic
Originally posted December 2019

Here are two excerpts:

Human beings evolved to gossip, preen, manipulate, and ostracize. We are easily lured into this new gladiatorial circus, even when we know that it can make us cruel and shallow. As the Yale psychologist Molly Crockett has argued, the normal forces that might stop us from joining an outrage mob—such as time to reflect and cool off, or feelings of empathy for a person being humiliated—are attenuated when we can’t see the person’s face, and when we are asked, many times a day, to take a side by publicly “liking” the condemnation.

In other words, social media turns many of our most politically engaged citizens into Madison’s nightmare: arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant while their public sociometer displays how far their creations have traveled.

(cut)

Twitter also made a key change in 2009, adding the “Retweet” button. Until then, users had to copy and paste older tweets into their status updates, a small obstacle that required a few seconds of thought and attention. The Retweet button essentially enabled the frictionless spread of content. A single click could pass someone else’s tweet on to all of your followers—and let you share in the credit for contagious content. In 2012, Facebook offered its own version of the retweet, the “Share” button, to its fastest-growing audience: smartphone users.

Chris Wetherell was one of the engineers who created the Retweet button for Twitter. He admitted to BuzzFeed earlier this year that he now regrets it. As Wetherell watched the first Twitter mobs use his new tool, he thought to himself: “We might have just handed a 4-year-old a loaded weapon.”

The coup de grâce came in 2012 and 2013, when Upworthy and other sites began to capitalize on this new feature set, pioneering the art of testing headlines across dozens of variations to find the version that generated the highest click-through rate. This was the beginning of “You won’t believe …” articles and their ilk, paired with images tested and selected to make us click impulsively. These articles were not usually intended to cause outrage (the founders of Upworthy were more interested in uplift). But the strategy’s success ensured the spread of headline testing, and with it emotional story-packaging, through new and old media alike; outrageous, morally freighted headlines proliferated in the following years.
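
Editor's Note: the headline-testing loop described above reduces to a very small computation. Here is a toy sketch; all headlines and counts are invented for illustration.

```python
# Show each candidate headline to a sample of readers, record impressions
# and clicks, and keep the variant with the highest click-through rate.
results = {
    "You won't believe what happened next": (10_000, 820),  # (shown, clicked)
    "A sober report on municipal budgets":  (10_000, 90),
}

def ctr(shown, clicked):
    return clicked / shown

winner = max(results, key=lambda headline: ctr(*results[headline]))
print("winning headline:", winner)
```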

The info is here.

Friday, December 13, 2019

The Ethical Dilemma at the Heart of Big Tech Companies

Emanuel Moss and Jacob Metcalf
Harvard Business Review
Originally posted 14 Nov 19

Here is an excerpt:

The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures:

Meritocracy: Although originally coined as a derisive term in satirical science fiction by British sociologist Michael Young, meritocracy infuses everything in Silicon Valley from hiring practices to policy positions, and retroactively justifies the industry’s power in our lives. As such, ethics is often framed with an eye toward smarter, better, and faster approaches, as if the problems of the tech industry can be addressed through those virtues. Given this, it is not surprising that many within the tech industry position themselves as the actors best suited to address ethical challenges, rather than less technically inclined stakeholders, including elected officials and advocacy groups. In our interviews, this manifested in relying on engineers to use their personal judgement by “grappling with the hard questions on the ground,” trusting them to discern and to evaluate the ethical stakes of their own products. While there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and “thinking hard” about the potential harms of a product in the real world is not the same as thoroughly understanding how someone (whose life is very different from a software engineer’s) might be affected by things like predictive policing or facial recognition technology, as obvious examples. Ethics owners find themselves being pulled between technical staff who assert generalized competence over many domains and their own knowledge that ethics is a specialized domain that requires deep contextual understanding.

The info is here.

Conference warned of dangers of facial recognition technology

Colm Keena
The Irish Times
Originally posted 13 Nov 19

Here is an excerpt:

The potential of facial recognition technology to be used by oppressive governments and manipulative corporations was such that some observers have called for it to be banned. The suggestion should be taken seriously, Dr Danaher said.

The technology is “like a fingerprint of your face”, is cheap, and “normalises blanket surveillance”. This makes it “perfect” for oppressive governments and for manipulative corporations.

While the EU’s GDPR laws on the use of data applied here, Dr Danaher said Ireland should also introduce domestic law “to save us from the depredations of facial recognition technology”.

As well as facial recognition technology, he addressed the conference on “deepfake” technology, which allows for the creation of highly convincing fake video content, and on risk-assessment algorithms, two other technologies that are creating challenges for the law.

In the US, the use of algorithms to predict a person’s likelihood of re-offending has raised significant concerns.

The info is here.

Thursday, December 12, 2019

State Supreme Court upholds decision in Beckley psychiatrist case

Jessica Farrish
The Register-Herald
Originally posted 8 Nov 19

The West Virginia Supreme Court of Appeals on Friday upheld a decision by the West Virginia Board of Medicine that imposed disciplinary actions on Dr. Omar Hasan, a Beckley psychiatrist.

The original case was decided in Kanawha County Circuit Court in July 2018 after Hasan appealed a decision by the West Virginia Board of Medicine to discipline him for an improper relationship with a patient. Hasan alleged the board had erred by failing to adopt the recommended findings of fact by its own hearing examiner, had improperly considered the content of text messages and had misstated various facts in its final order.

Court documents state that Hasan began providing psychiatric medication to a female patient in 2011. In September 2014, the patient reported to WVBOM that she and Hasan had had an improper relationship that included texts, phone calls, gifts and “sexual encounters on numerous occasions at various locations.”

She said that when Hasan ended the relationship, she tried to kill herself.

WVBOM investigated the patient’s claim and found probable cause to issue disciplinary actions against Hasan for entering a relationship with a patient for sexual satisfaction and for failing to cut off the patient-provider relationship once the texts had become sexual in nature, according to court filings.

Both are in violation of state law.

The info is here.

Donald Hoffman: The Case Against Reality

The Institute of Arts and Ideas
Originally published September 8, 2019


Many scientists believe that natural selection brought our perception of reality into clearer and deeper focus, reasoning that growing more attuned to the outside world gave our ancestors an evolutionary edge. Donald Hoffman, a cognitive scientist at the University of California, Irvine, thinks that just the opposite is true. Because evolution selects for survival, not accuracy, he proposes that our conscious experience masks reality behind millennia of adaptations for ‘fitness payoffs’ – an argument supported by his work running evolutionary game-theory simulations. In this interview recorded at the HowTheLightGetsIn Festival from the Institute of Arts and Ideas in 2019, Hoffman explains why he believes that perception must necessarily hide reality for conscious agents to survive and reproduce. With that view serving as a springboard, the wide-ranging discussion also touches on Hoffman’s consciousness-centric framework for reality, and its potential implications for our everyday lives.
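
Editor's Note: Hoffman's actual simulations are not reproduced here, but a toy Python version conveys the flavor of the "fitness beats truth" argument. Assume a resource whose fitness payoff peaks at a moderate quantity; an agent that perceives payoff then outcompetes an agent that perceives the true quantity. The payoff curve and all parameters are our own assumptions.

```python
import random

def fitness_payoff(quantity):
    # Assumed non-monotonic payoff: moderate amounts of the resource are
    # best; too little or too much is bad.
    return max(0.0, 1.0 - abs(quantity - 50) / 50)

def compare(trials=100_000, seed=1):
    rng = random.Random(seed)
    truth_total = fitness_total = 0.0
    for _ in range(trials):
        a, b = rng.uniform(0, 100), rng.uniform(0, 100)
        # "Truth" perceiver: picks the objectively larger quantity.
        truth_total += fitness_payoff(max(a, b))
        # "Fitness" perceiver: picks whichever quantity pays more.
        fitness_total += max(fitness_payoff(a), fitness_payoff(b))
    print(f"truth-tuned perceiver:   {truth_total / trials:.3f}")
    print(f"fitness-tuned perceiver: {fitness_total / trials:.3f}")

compare()
```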

Editor's Note: If you work as a mental health professional, this video may be helpful in understanding perception, the self, and consciousness.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.
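
Editor's Note: a short expected-value calculation (ours, not the paper's) shows why the veil pushes toward the greater good. In a dilemma where one person can be sacrificed to save five, a decider who does not know which of the six affected people they will be compares two survival probabilities.

```python
n_saved, n_sacrificed = 5, 1
n_affected = n_saved + n_sacrificed

# If the utilitarian option is taken, you survive unless you turn out to be
# the one sacrificed; if it is refused, you survive only if you are that one.
p_survive_if_act    = n_saved / n_affected       # 5/6
p_survive_if_refuse = n_sacrificed / n_affected  # 1/6

print(f"P(survive | act)    = {p_survive_if_act:.2f}")
print(f"P(survive | refuse) = {p_survive_if_refuse:.2f}")
```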

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.