Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and
D. M. Søndergaard (Eds.)
Designing Robots.

Abstract
This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. In particular, in the first movement, we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened—i.e. do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.

The book chapter is here.

Saturday, December 14, 2019

The Dark Psychology of Social Networks

Jonathan Haidt and Tobias Rose-Stockwell
The Atlantic
Originally posted December 2019

Here are two excerpts:

Human beings evolved to gossip, preen, manipulate, and ostracize. We are easily lured into this new gladiatorial circus, even when we know that it can make us cruel and shallow. As the Yale psychologist Molly Crockett has argued, the normal forces that might stop us from joining an outrage mob—such as time to reflect and cool off, or feelings of empathy for a person being humiliated—are attenuated when we can’t see the person’s face, and when we are asked, many times a day, to take a side by publicly “liking” the condemnation.

In other words, social media turns many of our most politically engaged citizens into Madison’s nightmare: arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant while their public sociometer displays how far their creations have traveled.

(cut)

Twitter also made a key change in 2009, adding the “Retweet” button. Until then, users had to copy and paste older tweets into their status updates, a small obstacle that required a few seconds of thought and attention. The Retweet button essentially enabled the frictionless spread of content. A single click could pass someone else’s tweet on to all of your followers—and let you share in the credit for contagious content. In 2012, Facebook offered its own version of the retweet, the “Share” button, to its fastest-growing audience: smartphone users.

Chris Wetherell was one of the engineers who created the Retweet button for Twitter. He admitted to BuzzFeed earlier this year that he now regrets it. As Wetherell watched the first Twitter mobs use his new tool, he thought to himself: “We might have just handed a 4-year-old a loaded weapon.”

The coup de grâce came in 2012 and 2013, when Upworthy and other sites began to capitalize on this new feature set, pioneering the art of testing headlines across dozens of variations to find the version that generated the highest click-through rate. This was the beginning of “You won’t believe …” articles and their ilk, paired with images tested and selected to make us click impulsively. These articles were not usually intended to cause outrage (the founders of Upworthy were more interested in uplift). But the strategy’s success ensured the spread of headline testing, and with it emotional story-packaging, through new and old media alike; outrageous, morally freighted headlines proliferated in the following years.

The info is here.

Friday, December 13, 2019

The Ethical Dilemma at the Heart of Big Tech Companies

Emanuel Moss and Jacob Metcalf
Harvard Business Review
Originally posted 14 Nov 19

Here is an excerpt:

The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures:

Meritocracy: Although originally coined as a derisive term in satirical science fiction by British sociologist Michael Young, meritocracy infuses everything in Silicon Valley from hiring practices to policy positions, and retroactively justifies the industry’s power in our lives. As such, ethics is often framed with an eye toward smarter, better, and faster approaches, as if the problems of the tech industry can be addressed through those virtues. Given this, it is not surprising that many within the tech industry position themselves as the actors best suited to address ethical challenges, rather than less technically-inclined stakeholders, including elected officials and advocacy groups. In our interviews, this manifested in relying on engineers to use their personal judgement by “grappling with the hard questions on the ground,” trusting them to discern and to evaluate the ethical stakes of their own products. While there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and “thinking hard” about the potential harms of a product in the real world is not the same as thoroughly understanding how someone (whose life is very different than a software engineer) might be affected by things like predictive policing or facial recognition technology, as obvious examples. Ethics owners find themselves being pulled between technical staff that assert generalized competence over many domains and their own knowledge that ethics is a specialized domain that requires deep contextual understanding.

The info is here.

Conference warned of dangers of facial recognition technology

Because of new technologies, “we are all monitored and recorded every minute of every day of our lives”, a conference has heard.

Colm Keena
The Irish Times
Originally posted 13 Nov 19

Here is an excerpt:

The potential of facial recognition technology to be used by oppressive governments and manipulative corporations was such that some observers have called for it to be banned. The suggestion should be taken seriously, Dr Danaher said.

The technology is “like a fingerprint of your face”, is cheap, and “normalises blanket surveillance”. This makes it “perfect” for oppressive governments and for manipulative corporations.

While the EU’s GDPR laws on the use of data applied here, Dr Danaher said Ireland should also introduce domestic law “to save us from the depredations of facial recognition technology”.

As well as facial recognition technology, he also addressed the conference about “deepfake” technology, which allows for the creation of highly convincing fake video content, and algorithms that assess risk, as other technologies that are creating challenges for the law.

In the US, the use of algorithms to predict a person’s likelihood of re-offending has raised significant concerns.

The info is here.

Thursday, December 12, 2019

State Supreme Court upholds decision in Beckley psychiatrist case

Jessica Farrish
The Register-Herald
Originally posted 8 Nov 19

The West Virginia Supreme Court of Appeals on Friday upheld a decision by the West Virginia Board of Medicine that imposed disciplinary actions on Dr. Omar Hasan, a Beckley psychiatrist.

The original case was decided in Kanawha County Circuit Court in July 2018 after Hasan appealed a decision by the West Virginia Board of Medicine to discipline him for an improper relationship with a patient. Hasan alleged the board had erred by failing to adopt recommended findings of fact by its own hearing examiner, had improperly considered the content of text messages, and had misstated various facts in its final order.

Court documents state that Hasan began providing psychiatric medication in 2011 to a female patient. In September 2014, the patient reported to WVBOM that she and Hasan had had an improper relationship that included texts, phone calls, gifts and “sexual encounters on numerous occasions at various locations.”

She said that when Hasan ended the relationship, she tried to kill herself.

WVBOM investigated the patient’s claim and found probable cause to issue disciplinary actions against Hasan for entering a relationship with a patient for sexual satisfaction and for failing to cut off the patient-provider relationship once the texts had become sexual in nature, according to court filings.

Both are in violation of state law.

The info is here.

Donald Hoffman: The Case Against Reality

The Institute of Arts and Ideas
Originally published September 8, 2019


Many scientists believe that natural selection brought our perception of reality into clearer and deeper focus, reasoning that growing more attuned to the outside world gave our ancestors an evolutionary edge. Donald Hoffman, a cognitive scientist at the University of California, Irvine, thinks that just the opposite is true. Because evolution selects for survival, not accuracy, he proposes that our conscious experience masks reality behind millennia of adaptations for ‘fitness payoffs’ – an argument supported by his work running evolutionary game-theory simulations. In this interview recorded at the HowTheLightGetsIn Festival from the Institute of Arts and Ideas in 2019, Hoffman explains why he believes that perception must necessarily hide reality for conscious agents to survive and reproduce. With that view serving as a springboard, the wide-ranging discussion also touches on Hoffman’s consciousness-centric framework for reality, and its potential implications for our everyday lives.

Editor Note: If you work as a mental health professional, this video may be helpful in understanding perceptions, understanding self, and consciousness.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.

When Assessing Novel Risks, Facts Are Not Enough

Baruch Fischhoff
Scientific American
September 2019

Here is an excerpt:

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.

The info is here.

Tuesday, December 10, 2019

Medicare for All: Would It Work? And Who Would Pay?

Ezekiel (Zeke) Emanuel
Podcast - Wharton
Originally posted 12 Nov 19

Here is an excerpt:

“If you want to control costs, there are at least three main areas you have to look at: drug costs, hospital costs to the private sector, and administrative costs,” he said. “All of them are out of whack. All of them are ballooned.”

On drug costs, for example, it is not clear if that would be achieved through negotiations with drug companies or by the government setting a price ceiling, Emanuel said. He suggested a way out: “We should have negotiations informed by value-based pricing,” he said. “How much health benefit does the drug give? The more the health benefit, the higher the price of the drug. But we do need to have caps.”

Emanuel also faulted Warren’s idea to limit payments to hospitals at 110% of Medicare rates as unwise. He suggested 120% of Medicare rates instead, adding that it would “probably have no real pushback from most of the health policy people, especially if you do have a reduction in administrative costs and a reduction in drug costs.”

Emanuel pointed to a recent Rand Corporation study which showed that on average, private health plans pay more than 240% of Medicare rates for hospital services. “That seems way out of whack,” he said. “There are a lot of hospital monopolies, and consolidation has led to price increases – not quality increases as claimed. We do have to rein in hospital prices.” The big question is how that could be achieved, which may include placing a cap on those prices, he added.

On reining in administrative costs, Emanuel saw hope. He noted that the private sector spends an average of 12% on administrative costs, and he blamed that on insurance companies and employers wanting to design their own employee health plans. He suggested a set of five or 10 standardized plans from which employers could choose, adding that common health plans work well in countries like the Netherlands, Germany and Switzerland. Japan has 1,600 insurance companies, but standardized health plans and a centralized clearinghouse help keep administrative costs low, he added.

The info is here.