Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, October 8, 2018

Purpose, Meaning and Morality Without God

Ralph Lewis
Psychology Today Blog
Originally posted September 9, 2018

Here is an excerpt:

Religion is not the source of purpose, meaning and morality. Rather, religion can be understood as having incorporated these natural motivational and social dispositions and having coevolved with human cultures over time. Unsurprisingly, religion has also incorporated our more selfish, aggressive, competitive, and xenophobic human proclivities.

Modern secular societies with the lowest levels of religious belief have achieved far more compassion and flourishing than religious ones.

Secular humanists understand that societal ethics and compassion are achieved solely through human action in a fully natural world. We can rely only on ourselves and our fellow human beings. All we have is each other, huddled together on this lifeboat of a little planet in this vast indifferent universe.

We will need to continue to work actively toward the collective goal of more caring societies in order to further strengthen the progress of our species.

Far from being nihilistic, the fully naturalist worldview of secular humanism empowers us and liberates us from our irrational fears, and from our feelings of abandonment by the god we were told would take care of us, and motivates us to live with a sense of interdependent humanistic purpose. This deepens our feelings of value, engagement, and relatedness. People can and do care, even if the universe doesn’t.

The blog post is here.

Evolutionary Psychology

Downes, Stephen M.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms. What distinguishes evolutionary psychologists from many cognitive psychologists is the proposal that the relevant internal mechanisms are adaptations—products of natural selection—that helped our ancestors get around the world, survive and reproduce. To understand the central claims of evolutionary psychology we require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind. Philosophers are interested in evolutionary psychology for a number of reasons. For philosophers of science—mostly philosophers of biology—evolutionary psychology provides a critical target. There is a broad consensus among philosophers of science that evolutionary psychology is a deeply flawed enterprise. For philosophers of mind and cognitive science evolutionary psychology has been a source of empirical hypotheses about cognitive architecture and specific components of that architecture. Philosophers of mind are also critical of evolutionary psychology but their criticisms are not as all-encompassing as those presented by philosophers of biology. Evolutionary psychology is also invoked by philosophers interested in moral psychology both as a source of empirical hypotheses and as a critical target.

The entry is here.

Sunday, October 7, 2018

Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Zerilli, J., Knott, A., Maclaurin, J. et al.
Philos. Technol. (2018).

Abstract

We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.

The article is here.

Saturday, October 6, 2018

Certainty Is Primarily Determined by Past Performance During Concept Learning

Louis Martí, Francis Mollica, Steven Piantadosi and Celeste Kidd
Open Mind: Discoveries in Cognitive Science
Posted Online August 16, 2018

Abstract

Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.

Download the pdf here.

Key Points: Ideally, learning and understanding would draw on all the data you have accumulated, not just the feedback on your most recent performance.  This research suggests, however, that it is recent feedback, rather than the accumulated evidence, that drives a person's sense of certainty when learning new things, or how to tell right from wrong.

Fascinating research, I hope I am interpreting it correctly.  I am not all that certain.
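To make the contrast in the paper concrete, here is a small, purely illustrative simulation — not the authors' actual task or model. It assumes a toy hypothesis space of threshold rules ("x is positive if x >= t") and compares two certainty signals: an ideal learner's posterior confidence, which can only grow as hypotheses are eliminated, and a feedback-based signal ("how well have I been doing lately?") of the kind the study found actually drives people's reported certainty.

```python
import random

random.seed(0)

STIMULI = list(range(10))
HYPOTHESES = list(range(10))   # hypothesis t: "x >= t" means x is a positive example
TRUE_T = 5

def label(x, t=TRUE_T):
    return x >= t

consistent = set(HYPOTHESES)   # hypotheses not yet falsified by the evidence
recent = []                    # feedback on the last few trials (1 = correct)
history = []                   # (ideal_certainty, feedback_certainty) per trial

for trial in range(30):
    x = random.choice(STIMULI)
    guess = x >= min(consistent)          # predict with one surviving rule
    y = label(x)
    recent = (recent + [int(guess == y)])[-5:]

    # Ideal-learner update: eliminate hypotheses inconsistent with the labeled example
    consistent = {t for t in consistent if (x >= t) == y}

    ideal_certainty = 1.0 / len(consistent)         # uniform posterior over survivors
    feedback_certainty = sum(recent) / len(recent)  # "how well am I doing lately?"
    history.append((ideal_certainty, feedback_certainty))
```

The ideal signal rises monotonically as evidence accumulates, while the feedback signal can swing with a run of lucky or unlucky guesses — which is, roughly, the dissociation the regression analyses exploit.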

Friday, October 5, 2018

Nike picks a side in America’s culture wars

Andrew Edgecliffe-Johnson
Financial Times
Originally posted September 7, 2018

Here is an excerpt:

This is Nike’s second reason to be confident: drill down into this week’s polls and they show that support for Nike and Kaepernick is strongest among millennial or Gen-Z, African-American, liberal urbanites — the group Nike targets. The company’s biggest risk is becoming “mainstream, the usual, everywhere, tamed”, Prof Lee says. Courting controversy forces its most dedicated fans to defend it and catches the eye of more neutral consumers.

Finally, Nike will have been encouraged by studies showing that consumers reward brands for speaking up on divisive social issues. But it is doing something more novel and calculated than other multinationals that have weighed in on immigration, gun control or race: it did not stumble into this controversy; it sought it.

A polarised populace is a fact of life for brands, in the US and beyond. That leaves them with a choice: try to carry on catering to a vanishing mass-market middle ground, or stake out a position that will infuriate one side but excite the other. The latter strategy has worked for politicians such as Mr Trump. Unlike elected officials, a brand can win with far less than 50.1 per cent of the population behind it. (Nike chief executive Mark Parker told investors last year that it was looking to just 12 global cities to drive 80 per cent of its growth.)

The info is here.

Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm

Camillo Lamanna and Lauren Byrne
AMA J Ethics. 2018;20(9):E902-910.

Abstract

A significant proportion of elderly and psychiatric patients do not have the capacity to make health care decisions. We suggest that machine learning technologies could be harnessed to integrate data mined from electronic health records (EHRs) and social media in order to estimate the confidence of the prediction that a patient would consent to a given treatment. We call this process, which takes data about patients as input and derives a confidence estimate for a particular patient’s predicted health care-related decision as an output, the autonomy algorithm. We suggest that the proposed algorithm would result in more accurate predictions than existing methods, which are resource intensive and consider only small patient cohorts. This algorithm could become a valuable tool in medical decision-making processes, augmenting the capacity of all people to make health care decisions in difficult situations.
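The authors do not publish an implementation, but the core idea — predict a patient's likely decision from similar past cases and attach a confidence estimate to that prediction — can be sketched in miniature. The records, features, and nearest-neighbour vote below are all hypothetical stand-ins for the EHR and social-media mining the paper envisions.

```python
# Illustrative sketch only: a toy nearest-neighbour vote stands in for the
# machine learning model the paper proposes. All data here is invented.

def hamming_similarity(a, b):
    """Fraction of binary features on which two patient profiles agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def predict_consent(history, patient, k=3):
    """Return (predicted_consent, confidence) from the k most similar past cases."""
    ranked = sorted(history,
                    key=lambda rec: hamming_similarity(rec[0], patient),
                    reverse=True)
    votes = [consented for _, consented in ranked[:k]]
    yes = sum(votes)
    prediction = yes * 2 >= len(votes)
    confidence = max(yes, len(votes) - yes) / len(votes)  # strength of the majority
    return prediction, confidence

# Hypothetical records: (binary feature vector, consented to treatment?)
records = [
    ((1, 0, 1, 1), True),
    ((1, 0, 1, 0), True),
    ((0, 1, 0, 0), False),
    ((0, 1, 0, 1), False),
    ((1, 1, 1, 1), True),
]

decision, confidence = predict_consent(records, (1, 0, 1, 1))
```

The ethically load-bearing part is the confidence output, not the prediction itself: a low-confidence estimate signals that the algorithm should defer rather than augment the decision.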

The article is here.

Thursday, October 4, 2018

7 Short-Term AI ethics questions

Orlando Torres
www.towardsdatascience.com
Originally posted April 4, 2018

Here is an excerpt:

2. Transparency of Algorithms

Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized, is the fact that some algorithms are obscure even to their creators.
Deep learning is a rapidly growing technique in machine learning that makes very good predictions, but is not really able to explain why it made any particular prediction.

For example, some algorithms have been used to fire teachers, without being able to give them an explanation of why the model indicated they should be fired.


How can we balance the need for more accurate algorithms with the need for transparency towards people who are being affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe’s new General Data Protection Regulation may do? If it’s true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?

3. Supremacy of Algorithms

A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?

For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.

The info is here.

Shouldn’t We Make It Easy to Use Behavioral Science for Good?

Manasee Desai
www.behavioralscientist.org
Originally posted September 4, 2018

The evidence showing that applied behavioral science is a powerful tool for creating social good is growing rapidly. As a result, it’s become much more common for the world’s problem solvers to apply a behavioral lens to their work. Yet this approach can still feel distant to the people trying urgently to improve lives on a daily basis—those working for governments, nonprofits, and other organizations that directly tackle some of the most challenging and pervasive problems facing us today.

All too often, effective strategies for change are either locked behind paywalls or buried in inaccessible, jargon-laden articles. And because of the sheer volume of behavioral solutions being tested now, even people working in the fields that compose the behavioral sciences—like me, for instance—cannot possibly stay on top of every new intervention or application happening across countless fields and countries. This means missed opportunities to apply and scale effective interventions and to do more good in the world.

As a field, figuring out how to effectively report and communicate what we’ve learned from our research and interventions is our own “last mile” problem.

While there is no silver bullet for the problems the world faces, the behavioral science community should (and can) come together to make our battle-tested solutions available to problem solvers, right at their fingertips. Expanding the adoption of behavioral design for social good requires freeing solutions from dense journals and cost-prohibitive paywalls. It also requires distilling complex designs into simpler steps—uniting a community that is passionate about social impact and making the world a better place with applied behavioral science.

That is the aim of the Behavioral Evidence Hub (B-Hub), a curated, open-source digital collection of behavioral interventions proven to impact real-world problems.

The info is here.

Wednesday, October 3, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction: A Systematic Review and Meta-analysis.

Maria Panagioti, PhD; Keith Geraghty, PhD; Judith Johnson, PhD; et al.
JAMA Intern Med. Published online September 4, 2018.
doi:10.1001/jamainternmed.2018.3713

Abstract

Importance  Physician burnout has taken the form of an epidemic that may affect core domains of health care delivery, including patient safety, quality of care, and patient satisfaction. However, this evidence has not been systematically quantified.

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cohen Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cohen Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.
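The odds ratios in the abstract are reported with 95% confidence intervals, and the relationship between the two is mechanical: on the log scale, the interval half-width is 1.96 standard errors. The sketch below converts the three reported associations to log odds ratios and shows simple inverse-variance pooling. (For simplicity this is fixed-effect pooling; the study itself used random-effects models, which additionally estimate between-study variance.)

```python
import math

def ci_to_log_scale(odds_ratio, lo, hi):
    """Convert an odds ratio and its 95% CI to a log odds ratio and standard error."""
    log_or = math.log(odds_ratio)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width = 2 * 1.96 * SE
    return log_or, se

def pool_fixed_effect(estimates):
    """Inverse-variance (fixed-effect) pooling of (log_or, se) pairs."""
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * log_or for (log_or, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, se_pooled

# The three pooled associations reported in the abstract, on the log scale:
safety = ci_to_log_scale(1.96, 1.59, 2.40)           # patient safety incidents
professionalism = ci_to_log_scale(2.31, 1.87, 2.85)  # low professionalism
satisfaction = ci_to_log_scale(2.28, 1.42, 3.68)     # reduced patient satisfaction
```

Note how the patient-satisfaction estimate, despite a similar point estimate to professionalism, has a much wider interval and therefore a larger standard error — consistent with the authors' caution about high heterogeneity.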

The research is here.