Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, October 9, 2018

Morality is the new profit – banks must learn or die

Zoe Williams
The Guardian
Originally posted September 10, 2018

Here is an excerpt:

Ten years ago, “ethical” investing meant not buying shares in arms and alcohol, as if morality were so unfamiliar to financial decision-making that you had to go back to the 19th century and borrow it from the Quakers. The growth of banks with a moral mission – like Triodos (“quality of life, human dignity, sustainability”) – or investments with a social purpose – like Abundance, which finances renewable energy – has been impressive on its own terms, but remained niche, for baby boomers with a conscience. The idea that all market activity should have a purpose other than profit is roughly where it always was on the spectrum, somewhere between Marx and Jesus – one for the rioters, the subversives, the people with beards, unsuited to mainstream discourse.

But there is nothing more pragmatic and less idealistic than to insist on the social purpose of the market; banking cannot survive without it – not as a corporate bolt-on but as its driving and decisive motivation. The derivatives trade cannot weather the consequences of infinite self-interest, because there really will be consequences – extreme global ones. The planet cannot survive an endless cost-benefit analysis in which nature is pitted against profit. Nature will always lose and so will humanity as a result. Whatever the immediate cause of the next crash, if and when it comes its roots will be environmental. The Financial Times talks about “the insidious danger that pension funds deflate, leaving a generation without enough money to retire”. The most likely cause for that devaluation of pensions – leaving aside the generation that cannot afford to save for the future – will be stranded assets, pension funds having invested in fossil fuels that cannot be excavated.

The info is here.

Top Cancer Researcher Fails to Disclose Corporate Financial Ties in Major Research Journals

Charles Ornstein and Katie Thomas
The New York Times
Originally published September 8, 2018

One of the world’s top breast cancer doctors failed to disclose millions of dollars in payments from drug and health care companies in recent years, omitting his financial ties from dozens of research articles in prestigious publications like The New England Journal of Medicine and The Lancet.

The researcher, Dr. José Baselga, a towering figure in the cancer world, is the chief medical officer at Memorial Sloan Kettering Cancer Center in New York. He has held board memberships or advisory roles with Roche and Bristol-Myers Squibb, among other corporations, has had a stake in start-ups testing cancer therapies, and played a key role in the development of breakthrough drugs that have revolutionized treatments for breast cancer.

According to an analysis by The New York Times and ProPublica, Dr. Baselga did not follow financial disclosure rules set by the American Association for Cancer Research when he was president of the group. He also left out payments he received from companies connected to cancer research in his articles published in the group’s journal, Cancer Discovery. At the same time, he has been one of the journal’s two editors in chief.

The info is here.

Monday, October 8, 2018

Purpose, Meaning and Morality Without God

Ralph Lewis
Psychology Today Blog
Originally posted September 9, 2018

Here is an excerpt:

Religion is not the source of purpose, meaning and morality. Rather, religion can be understood as having incorporated these natural motivational and social dispositions and having coevolved with human cultures over time. Unsurprisingly, religion has also incorporated our more selfish, aggressive, competitive, and xenophobic human proclivities.

Modern secular societies with the lowest levels of religious belief have achieved far more compassion and flourishing than religious ones.

Secular humanists understand that societal ethics and compassion are achieved solely through human action in a fully natural world. We can rely only on ourselves and our fellow human beings. All we have is each other, huddled together on this lifeboat of a little planet in this vast indifferent universe.

We will need to continue to work actively toward the collective goal of more caring societies in order to further strengthen the progress of our species.

Far from being nihilistic, the fully naturalist worldview of secular humanism empowers us and liberates us from our irrational fears, and from our feelings of abandonment by the god we were told would take care of us, and motivates us to live with a sense of interdependent humanistic purpose. This deepens our feelings of value, engagement, and relatedness. People can and do care, even if the universe doesn't.

The blog post is here.

Evolutionary Psychology

Downes, Stephen M.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms. What distinguishes evolutionary psychologists from many cognitive psychologists is the proposal that the relevant internal mechanisms are adaptations—products of natural selection—that helped our ancestors get around the world, survive and reproduce. To understand the central claims of evolutionary psychology we require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind. Philosophers are interested in evolutionary psychology for a number of reasons. For philosophers of science, mostly philosophers of biology, evolutionary psychology provides a critical target. There is a broad consensus among philosophers of science that evolutionary psychology is a deeply flawed enterprise. For philosophers of mind and cognitive science, evolutionary psychology has been a source of empirical hypotheses about cognitive architecture and specific components of that architecture. Philosophers of mind are also critical of evolutionary psychology, but their criticisms are not as all-encompassing as those presented by philosophers of biology. Evolutionary psychology is also invoked by philosophers interested in moral psychology, both as a source of empirical hypotheses and as a critical target.

The entry is here.

Sunday, October 7, 2018

Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Zerilli, J., Knott, A., Maclaurin, J. et al.
Philos. Technol. (2018).

Abstract

We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.

The article is here.

Saturday, October 6, 2018

Certainty Is Primarily Determined by Past Performance During Concept Learning

Louis Martí, Francis Mollica, Steven Piantadosi and Celeste Kidd
Open Mind: Discoveries in Cognitive Science
Posted Online August 16, 2018

Abstract

Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.
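
The performance-based certainty signal the paper describes can be made concrete. The sketch below is not the authors' model; it simply operationalizes "how well they observed themselves doing" as a sliding-window accuracy over recent feedback, with invented trial data:

```python
def performance_certainty(feedback, window=5):
    """Certainty proxy driven by observed performance: the fraction of
    the learner's last few responses that were marked correct."""
    recent = feedback[-window:]
    return sum(recent) / len(recent) if recent else 0.5

# An improving learner over 12 trials (1 = correct, 0 = incorrect; made-up data)
feedback = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
print(performance_certainty(feedback[:4]))   # early in learning: 0.25
print(performance_certainty(feedback))       # late in learning: 1.0
```

The paper's regression analyses found that a signal like this explained participants' reported certainty better than statistics derived from an ideal learning model.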

Download the pdf here.

Key Points: To learn and understand well, you need to use all the data you have accumulated, not just the feedback on your most recent performance.  Yet the study suggests that feedback, rather than the full accumulated evidence, is what drives a person's sense of certainty when learning new things, or how to tell right from wrong.

Fascinating research, I hope I am interpreting it correctly.  I am not all that certain.

Friday, October 5, 2018

Nike picks a side in America’s culture wars

Andrew Edgecliffe-Johnson
Financial Times
Originally posted September 7, 2018

Here is an excerpt:

This is Nike’s second reason to be confident: drill down into this week’s polls and they show that support for Nike and Kaepernick is strongest among millennial or Gen-Z, African-American, liberal urbanites — the group Nike targets. The company’s biggest risk is becoming “mainstream, the usual, everywhere, tamed”, Prof Lee says. Courting controversy forces its most dedicated fans to defend it and catches the eye of more neutral consumers.

Finally, Nike will have been encouraged by studies showing that consumers reward brands for speaking up on divisive social issues. But it is doing something more novel and calculated than other multinationals that have weighed in on immigration, gun control or race: it did not stumble into this controversy; it sought it.

A polarised populace is a fact of life for brands, in the US and beyond. That leaves them with a choice: try to carry on catering to a vanishing mass-market middle ground, or stake out a position that will infuriate one side but excite the other. The latter strategy has worked for politicians such as Mr Trump. Unlike elected officials, a brand can win with far less than 50.1 per cent of the population behind it. (Nike chief executive Mark Parker told investors last year that it was looking to just 12 global cities to drive 80 per cent of its growth.)

The info is here.

Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm

Camillo Lamanna and Lauren Byrne
AMA J Ethics. 2018;20(9):E902-910.

Abstract

A significant proportion of elderly and psychiatric patients do not have the capacity to make health care decisions. We suggest that machine learning technologies could be harnessed to integrate data mined from electronic health records (EHRs) and social media in order to estimate the confidence of the prediction that a patient would consent to a given treatment. We call this process, which takes data about patients as input and derives a confidence estimate for a particular patient’s predicted health care-related decision as an output, the autonomy algorithm. We suggest that the proposed algorithm would result in more accurate predictions than existing methods, which are resource intensive and consider only small patient cohorts. This algorithm could become a valuable tool in medical decision-making processes, augmenting the capacity of all people to make health care decisions in difficult situations.
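
The abstract describes the algorithm only at the level of inputs and outputs. As a purely hypothetical sketch, with feature names and weights invented for illustration (the paper specifies no model), the mapping from patient data to a consent-confidence estimate might look like a simple probabilistic scorer:

```python
import math

# Invented, illustrative features and weights; the paper does not specify a model.
WEIGHTS = {
    "prior_consent_rate": 2.0,  # fraction of past comparable treatments consented to
    "stated_preference": 1.5,   # +1 documented pro-treatment, -1 against, 0 unknown
    "family_support": 0.5,      # +1 if family reports the patient would want treatment
}
BIAS = -1.0

def autonomy_score(features):
    """Estimate P(patient would consent), the 'confidence estimate' the
    abstract describes, via a logistic combination of the features."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = {"prior_consent_rate": 0.9, "stated_preference": 1, "family_support": 1}
print(round(autonomy_score(patient), 3))  # about 0.94
```

In the paper's vision the inputs would be mined from EHRs and social media rather than hand-entered; a confidence near 0.5 would signal that the data give little guidance either way.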

The article is here.

Thursday, October 4, 2018

7 Short-Term AI Ethics Questions

Orlando Torres
www.towardsdatascience.com
Originally posted April 4, 2018

Here is an excerpt:

2. Transparency of Algorithms

Even more worrying than the fact that companies won't allow their algorithms to be publicly scrutinized is the fact that some algorithms are obscure even to their creators.

Deep learning is a rapidly growing technique in machine learning that makes very good predictions but is not really able to explain why it made any particular prediction.

For example, some algorithms have been used to fire teachers without anyone being able to give them an explanation of why the model indicated they should be fired.

How can we balance the need for more accurate algorithms with the need for transparency towards people who are being affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe's new General Data Protection Regulation may do? If it's true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?
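
One way to picture the accuracy-versus-transparency trade-off: a linear scorer, unlike a deep network, can decompose its output into per-feature contributions that an affected person can inspect. The features and weights below are invented for illustration and do not come from any real evaluation system:

```python
# Hypothetical teacher-evaluation features; weights are made up for illustration.
FEATURE_WEIGHTS = {"student_growth": 0.5, "years_experience": 0.25, "absences": -0.5}

def score_with_explanation(features):
    """Return the overall score plus the contribution of each feature,
    the kind of itemized explanation a deep model cannot readily give."""
    contributions = {name: FEATURE_WEIGHTS[name] * features[name]
                     for name in FEATURE_WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"student_growth": 2.0, "years_experience": 4.0, "absences": 3.0}
)
print(total)  # 0.5
print(why)    # each feature's share of the score, signed
```

Whether this kind of inspectable model is worth a possible drop in predictive accuracy is exactly the question the excerpt raises.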

3. Supremacy of Algorithms

A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?

For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.

The info is here.