Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Judgment.

Saturday, February 10, 2018

Could Biologically Enhancing Our Morality Save Our Species?

Julian Savulescu
Leapsmag.com
Originally published January 12, 2017

Here is an excerpt:

Our limitations have also become apparent in another form of existential threat: resource depletion. Despite our best efforts at educating, nudging, and legislating on climate change, carbon dioxide emissions in 2017 are expected to come in at the highest ever following a predicted rise of 2 percent. Why? We aren’t good at cooperating in larger groups where freeriding is not easily spotted. We also deal with problems in order of urgency. A problem close by is much more significant to us than a problem in the future. That’s why even if we accept there is a choice between economic recession now or natural disasters and potential famine in the future, we choose to carry on drilling for oil. And if the disasters and famine are present day, but geographically distant, we still choose to carry on drilling.

So what is our radical solution? We propose that there is a need for what we call moral bioenhancement. That is, for seeking a biological intervention that can help us overcome our evolved moral limitations. For example, adapting our biology so that we can appreciate the suffering of foreign or future people in the same instinctive way we do our friends and neighbors. Or, in the case of individuals, in addressing the problem of psychopathy from a biological perspective.

The information is here.

Sunday, February 4, 2018

Goldwater Rule: Red Line or Guideline?

Scott O. Lilienfeld, Joshua D. Miller, and Donald R. Lynam
Perspectives on Psychological Science 
Vol 13, Issue 1, pp. 33 - 35
First Published October 13, 2017

The decades following Miller’s (1969) call for psychological scientists to “give psychology away” have witnessed a growing recognition that we need to do more to communicate our knowledge to the general public (Kaslow, 2015; Lilienfeld, 2012). But should there be limits on the nature of this communication? The Goldwater Rule, which expressly forbids psychiatrists from commenting on the mental health of public figures whom they have not directly examined, answers this query in the affirmative; as we observed in our article (Lilienfeld, Miller, & Lynam, 2017), this rule has been de facto adopted by psychology.

We appreciate the opportunity to respond to two commentators who raise thoughtful qualifications and objections to our thesis, which holds that the Goldwater Rule is antiquated and premised on dubious scientific assumptions.  We are pleased that both scholars concur with us that the direct interview assumption—the principal empirical linchpin of the Goldwater Rule—is contradicted by large bodies of psychological research.

(cut to the conclusion)

Psychologists should typically refrain from proffering diagnostic judgments regarding public figures. Such judgments boost the risk of inaccurate “armchair” diagnoses and of damaging the reputation of public figures and the profession at large. At the same time, there is scant justification for a categorical ban on this practice, especially because psychologists can at times offer diagnostic information that bears to some degree on the question of individuals’ suitability for high public office. We therefore recommend reformulating the “Goldwater Rule” as the “Goldwater Guideline.” Such a change would underscore the wisdom of discretion with respect to statements concerning the diagnostic status of public figures but remind psychologists that such statements can be useful and even advisable within limits.

The article is here.

Friday, December 22, 2017

Professional Self-Care to Prevent Ethics Violations

Claire Zilber
The Ethical Professor
Originally published December 4, 2017

Here is an excerpt:

Although there are many variables that lead a professional to violate an ethics rule, one frequent contributing factor is impairment from stress caused by a family member's illness (sick child, dying parent, spouse's chronic health condition, etc.). Some health care providers who have been punished by their licensing board, hospital board or practice group for an ethics violation tell similar stories of being under unusual levels of stress because of a family member who was ill. In that context, they deviated from their usual behavior.

For example, a surgeon whose son was mentally ill prescribed psychotropic medications to him because he refused to go to a psychiatrist. This surgeon was entering into a dual relationship with her child and prescribing outside of her area of competence, but felt desperate to help her son. Another physician, deeply unsettled by his wife’s diagnosis with and treatment for breast cancer, had an extramarital affair with a nurse who was also his employee. This physician sought comfort without thinking about the boundaries he was violating at work, the risk he was creating for his practice, or the harm he was causing to his marriage.

Physicians cannot avoid stressful events at work and in their personal lives, but they can exert some control over how they adapt to or manage that stress. Physician self-care begins with self-awareness, which can be supported by such practices as mindfulness meditation, reflective writing, supervision, or psychotherapy. Self-awareness increases compassion for the self and for others, and reduces burnout.

The article is here.

Wednesday, October 18, 2017

Danny Kahneman on AI versus Humans


NBER Economics of AI Workshop 2017

Here is a rough transcription of an excerpt:

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called “judgment”. Here in my noggin it’s “evaluation of outcomes”: the utility side of the decision function. I really don’t see why that should be reserved to humans.

I’d like to make the following argument:
  1. The main characteristic of people is that they’re very “noisy”.
  2. You show them the same stimulus twice, they don’t give you the same response twice.
  3. You show them the same choice twice, I mean—that's why we had stochastic choice theory: because there is so much variability in people's choices given the same stimuli.
  4. Now, what can be done even without AI is a program that observes an individual, that will be better than the individual and will make better choices for the individual because it will be noise-free.
  5. We know an interesting tidbit from the literature on prediction that Colin cited:
  6. If you take clinicians and you have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician's judgment, that model does better in predicting the outcome than the clinician.
  7. That is fundamental.
This is telling you that one of the major limitations on human performance is not bias; it is just noise.
I’m maybe partly responsible for this, but when people now talk about error they tend to think of bias as an explanation: it’s the first thing that comes to mind. Well, there is bias, and it is an error. But in fact most of the errors that people make are better viewed as random noise. And there’s an awful lot of it.
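
Point 6 is the classic "model of the judge" (bootstrapping) result from the clinical-prediction literature: a simple linear model fit to a clinician's own judgments tends to predict the criterion better than the clinician does, because the model strips out the clinician's trial-to-trial noise. Here is a minimal simulation sketch of that comparison; the cues, weights, and noise levels are illustrative assumptions, not values from any study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated setup (purely illustrative): each case has a few cues, a true
# outcome driven by those cues, and a clinician whose judgment uses the
# same cues but adds personal, trial-to-trial noise.
n_cases, n_cues = 500, 4
cues = rng.normal(size=(n_cases, n_cues))
true_weights = np.array([0.5, 0.3, 0.15, 0.05])

outcome = cues @ true_weights + rng.normal(scale=0.5, size=n_cases)
clinician = cues @ true_weights + rng.normal(scale=1.0, size=n_cases)  # noisy judge

# "Model of the judge": regress the clinician's judgments (not the outcome)
# on the cues, then use that noise-free model to predict the outcome.
X = np.column_stack([np.ones(n_cases), cues])
judge_model_coefs, *_ = np.linalg.lstsq(X, clinician, rcond=None)
judge_model_pred = X @ judge_model_coefs

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("clinician vs outcome:      r =", round(corr(clinician, outcome), 3))
print("model-of-judge vs outcome: r =", round(corr(judge_model_pred, outcome), 3))
# The linear model of the judge typically correlates more highly with the
# outcome than the judge's own (noisy) predictions do.
```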

The entire transcript and target article is here.

Thursday, June 15, 2017

Act Versus Impact: Conservatives and Liberals Exhibit Different Structural Emphases in Moral Judgment

Ivar R. Hannikainen, M. Miller, A. Cushman
Ratio (2017)
doi:10.1111/rati.12162

Abstract

Conservatives and liberals disagree sharply on matters of morality and public policy. We propose a novel account of the psychological basis of these differences. Specifically, we find that conservatives tend to emphasize the intrinsic value of actions during moral judgment, in part by mentally simulating themselves performing those actions, while liberals instead emphasize the value of the expected outcomes of the action. We then demonstrate that a structural emphasis on actions is linked to the condemnation of victimless crimes, a distinctive feature of conservative morality. Next, we find that the conservative and liberal structural approaches to moral judgment are associated with their corresponding patterns of reliance on distinct moral foundations. In addition, the structural approach uniquely predicts that conservatives will be more opposed to harm in circumstances like the well-known trolley problem, a result which we replicate. Finally, we show that the structural approaches of conservatives and liberals are partly linked to underlying cognitive styles (intuitive versus deliberative). Collectively, these findings forge a link between two important yet previously independent lines of research in political psychology: cognitive style and moral foundations theory.

The article is here.

Sunday, February 19, 2017

Most People Consider Themselves to Be Morally Superior

By Cindi May
Scientific American
Originally published on January 31, 2017

Here are two excerpts:

This self-enhancement effect is most profound for moral characteristics. While we generally cast ourselves in a positive light relative to our peers, above all else we believe that we are more just, more trustworthy, more moral than others. This self-righteousness can be destructive because it reduces our willingness to cooperate or compromise, creates distance between ourselves and others, and can lead to intolerance or even violence. Feelings of moral superiority may play a role in political discord, social conflict, and even terrorism.

(cut)

So we believe ourselves to be more moral than others, and we make these judgments irrationally. What are the consequences? On the plus side, feelings of moral superiority could, in theory, protect our well-being. For example, there is danger in mistakenly believing that people are more trustworthy or loyal than they really are, and approaching others with moral skepticism may reduce the likelihood that we fall prey to a liar or a cheat. On the other hand, self-enhanced moral superiority could erode our own ethical behavior. Evidence from related studies suggests that self-perceptions of morality may “license” future immoral actions.

The article is here.

Friday, February 10, 2017

Dysfunction Disorder

Joaquin Sapien
Pro Publica
Originally published on January 17, 2017

Here is an excerpt:

The mental health professionals in both cases had been recruited by Montego Medical Consulting, a for-profit company under contract with New York City's child welfare agency. For more than a decade, Montego was paid hundreds of thousands of dollars a year by the city to produce thousands of evaluations in Family Court cases -- of mothers and fathers, spouses and children. Those evaluations were then shared with judges making decisions of enormous sensitivity and consequence: whether a child could stay at home or if they'd be safer in foster care; whether a parent should be enrolled in a counseling program or put on medication; whether parents should lose custody of their children altogether.

In 2012, a confidential review done at the behest of frustrated lawyers and delivered to the administrative judge of Family Court in New York City concluded that the work of the psychologists lined up by Montego was inadequate in nearly every way. The analysis matched roughly 25 Montego evaluations against 20 criteria from the American Psychological Association and other professional guidelines. None of the Montego reports met all 20 criteria. Some met as few as five. The psychologists used by Montego often didn't actually observe parents interacting with children. They used outdated or inappropriate tools for psychological assessments, including one known as a "projective drawing" exercise.

(cut)

Attorneys and psychologists who have worked in Family Court say judges lean heavily on assessments made by psychologists, often referred to as "forensic evaluators." So do judges themselves.

"In many instances, judges rely on forensic evaluators more than perhaps they should," said Jody Adams, who served as a Family Court judge in New York City for nearly 20 years before leaving the bench in 2012. "They should have more confidence in their own insight and judgment. A forensic evaluator's evidence should be a piece of the judge's decision, but not determinative. These are unbelievably difficult decisions; these are not black and white; they are filled with gray areas and they have lifelong consequences for children and their families. So it's human nature to want to look for help where you can get it."

The article is here.

Monday, December 5, 2016

The Simple Economics of Machine Intelligence

Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Harvard Business Review
Originally published November 17, 2016

Here are two excerpts:

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

(cut)

As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.
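
The complements logic can be made concrete with a toy model. The functional form and numbers below are my own illustrative assumptions, not the authors': suppose every decision requires exactly one machine prediction and one unit of human judgment, and a fixed budget is split between the two inputs.

```python
# Toy "perfect complements" sketch (illustrative assumptions only):
# each decision needs one machine prediction and one unit of human judgment,
# and a fixed budget is spent on the two inputs together.

def judgment_demanded(budget: float, price_prediction: float, price_judgment: float) -> float:
    """With perfect complements you buy equal quantities of both inputs,
    so quantity = budget / (price_prediction + price_judgment)."""
    return budget / (price_prediction + price_judgment)

budget = 100.0
price_judgment = 2.0
for price_prediction in (8.0, 4.0, 1.0, 0.25):  # machine prediction getting cheaper
    q = judgment_demanded(budget, price_prediction, price_judgment)
    print(f"prediction costs {price_prediction:>5}: units of judgment demanded = {q:.1f}")
# As the price of prediction falls, the same budget supports more decisions,
# and each decision still needs human judgment, so judgment demand rises.
```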

The article is here.

Tuesday, September 6, 2016

The Problem With Slow Motion

By Eugene Caruso, Zachary Burns & Benjamin Converse
The New York Times - Gray Matter
Originally published August 5, 2016

Here are two excerpts:

Watching slow-motion footage of an event can certainly improve our judgment of what happened. But can it also impair judgment?

(cut)

Those who saw the shooting in slow motion felt that the actor had more time to act than those who saw it at regular speed — and the more time they felt he had, the more likely they were to see intention in his action. (We found similar results in a separate study involving video footage of a prohibited “helmet to helmet” tackle in the National Football League, where the question was whether the player intended to strike the opposing player in the proscribed manner.)

The article is here.

Monday, August 29, 2016

Implicit bias is a challenge even for judges

Terry Carter
ABA Journal
Originally posted August 5, 2016

Judges are tasked with being the most impartial members of the legal profession. On Friday afternoon, more than 50 of them discussed how this isn’t so easy to do—and perhaps even impossible when it comes to implicit bias.

But working to overcome biases we don’t recognize is a job that is as necessary as it is worth doing.

“We view our job functions through the lens of our experiences, and all of us are impacted by biases and stereotypes and other cognitive functions that enable us to take shortcuts in what we do,” 6th U.S. Circuit Court of Appeals Judge Bernice B. Donald told a gathering of judges, state and federal, from around the country. Donald was on a panel for a program by the ABA’s Judicial Division, titled “Implicit Bias and De-Biasing Strategies: A Workshop for Judges and Lawyers,” at the association’s annual meeting in San Francisco.

The post is here.

Friday, May 20, 2016

Sleep Deprivation and Advice Taking

Jan Alexander Häusser, Johannes Leder, Charlene Ketturat, Martin Dresler & Nadira Sophie Faber
Scientific Reports 6, Article number: 24386 (2016)
doi:10.1038/srep24386

Abstract

Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by – more or less qualified – advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants.
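
The abstract reports "increased advice taking" in a judge-advisor estimation task. A common way to quantify advice taking in such tasks is the weight-of-advice index (whether the authors use exactly this measure is an assumption on my part), computed per trial as the fraction of the distance toward the advisor's estimate that the participant moves:

```python
import numpy as np

def weight_of_advice(initial, advice, final):
    """Weight of advice (WOA): 0 = advice ignored, 1 = advice fully adopted.
    WOA = (final - initial) / (advice - initial), usually clipped to [0, 1]."""
    initial, advice, final = map(np.asarray, (initial, advice, final))
    woa = (final - initial) / (advice - initial)
    return np.clip(woa, 0.0, 1.0)

# Hypothetical estimates from one participant across three trials:
initial = [120, 80, 200]   # participant's first estimate
advice  = [150, 60, 260]   # advisor's estimate
final   = [140, 70, 215]   # participant's revised estimate
print(weight_of_advice(initial, advice, final))   # approx. [0.67, 0.5, 0.25]
```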

The article is here.

Thursday, May 12, 2016

Harm Mediates the Disgust-Immorality Link

Chelsea Schein, Ryan Ritter, & Kurt Gray
Emotion, in press

Abstract

Many acts are disgusting, but only some of these acts are immoral. Dyadic morality predicts that disgusting acts should be judged as immoral to the extent that they seem harmful. Consistent with this prediction, three studies reveal that perceived harm mediates the link between feelings of disgust and moral condemnation—even for ostensibly harmless “purity” violations. In many cases, accounting for perceived harm completely eliminates the link between disgust and moral condemnation. Analyses also reveal the predictive power of anger and typicality/weirdness in moral judgments of disgusting acts. The mediation of disgust by harm holds across diverse acts including gay marriage, sex acts, and religious blasphemy. Revealing the endogenous presence and moral relevance of harm within disgusting-but-ostensibly-harmless acts argues against modular accounts of moral cognition such as moral foundations theory. Instead, these data support pluralistic conceptions of harm and constructionist accounts of morality and emotion. Implications for moral cognition and the concept of “purity” are discussed.
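
The core claim is statistical mediation: the disgust-condemnation link shrinks, sometimes to zero, once perceived harm is entered into the model. Below is a regression-based sketch of that logic on simulated data; the effect sizes are made up, and this is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Simulated data (made-up effect sizes): disgust raises perceived harm,
# and perceived harm, not disgust itself, drives moral condemnation.
disgust = rng.normal(size=n)
harm = 0.6 * disgust + rng.normal(scale=0.8, size=n)
condemnation = 0.7 * harm + rng.normal(scale=0.8, size=n)

def ols_slopes(y, *predictors):
    """Ordinary least squares; returns the slopes (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:]

(c_total,) = ols_slopes(condemnation, disgust)            # total effect of disgust
(a,)       = ols_slopes(harm, disgust)                    # disgust -> perceived harm
b, c_direct = ols_slopes(condemnation, harm, disgust)     # harm & disgust -> condemnation

print(f"total effect of disgust:    {c_total:.2f}")
print(f"direct effect (harm held):  {c_direct:.2f}")   # shrinks toward zero
print(f"indirect effect (a * b):    {a * b:.2f}")      # carried via perceived harm
```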

The article is here.

Monday, December 28, 2015

The role of emotion in ethics and bioethics: dealing with repugnance and disgust

Mark Sheehan
J Med Ethics 2016;42:1-2
doi:10.1136/medethics-2015-103294

Here is an excerpt:

But what generally are we to say about the role of emotions in ethics and in ethical judgement? We tend to sharply distinguish ‘mere’ emotions or emotional responses from reasoned or rational argument. Clearly, it would seem, if we are to make claims about rightness or wrongness they should be on the basis of reasons and rational argument. Emotions look to be outside of this paradigm concerned as they are with our responses to the world rather than the world itself and the clear articulation of inferential relationships within it. Most importantly emotions are felt subjectively and so cannot lay any generalised claim on others (particularly others who do not feel as the arguer does). The subjectivity of emotions means that they cannot function in arguments because, unless they are universal, they cannot form the basis of a claim on another person. The reason they cannot form this basis is because that other person may not have that emotion: relying on it means the argument can only apply to those who do. An argument that relies on feeling particular emotions, particularly emotions that we don't all feel in the same way, is weak to that extent and certainly weaker than one that does not.

In the case at hand, repugnance or disgust only have persuasive power to those who feel these emotions in response to human reproductive cloning. If all people felt one or the other, then claims based on an appeal to repugnance or disgust would have persuasive power over all of us. But even if these were generally or commonly felt emotions here, such persuasive power would be distinct from an argument's having persuasive power over us because of the reasons it provides for us independently of contingently felt emotions. An argument, then, that is based on an appeal to emotion, as Kass' and Kekes' apparently are, can, at best, be only as strong as the generalisability of the empirical claim about the relevant emotion.

The article is here.

Sunday, November 8, 2015

Deconstructing the seductive allure of neuroscience explanations

Weisberg DS, Keil FC, Goodstein J, Rawson E, Gray JR.
Judgment and Decision Making, Vol. 10, No. 5, 
September 2015, pp. 429–441

Abstract

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people's abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) x 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts' judgments of bad explanations, masking otherwise salient problems in these explanations.
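
The key result is an interaction in the 2 × 2 design: irrelevant neuroscience inflates nonexperts' satisfaction ratings, most strongly for bad explanations. A small sketch of how the cell means and interaction contrast could be computed is below; the ratings are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical mean satisfaction ratings (e.g., on a -3..+3 scale) for nonexperts,
# laid out as a 2 (explanation quality) x 2 (neuroscience absent/present) table.
#                    without neuro   with neuro
ratings = np.array([[ 0.9,            1.1],    # good explanations
                    [-0.8,            0.4]])   # bad explanations

quality_effect = ratings[0].mean() - ratings[1].mean()        # good vs. bad
neuro_effect = ratings[:, 1].mean() - ratings[:, 0].mean()    # with vs. without
# Interaction: does neuroscience boost bad explanations more than good ones?
interaction = (ratings[1, 1] - ratings[1, 0]) - (ratings[0, 1] - ratings[0, 0])

print(f"quality main effect:             {quality_effect:+.2f}")
print(f"neuroscience main effect:        {neuro_effect:+.2f}")
print(f"interaction (bad benefits more): {interaction:+.2f}")
```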

The entire article is here.

Thursday, November 5, 2015

The Funny Thing About Adversity

By David DeSteno
The New York Times
Originally published October 16, 2015

Here are several excerpts:

In both studies, the results were the same. Those who had faced increasingly severe adversities in life — loss of a loved one at an early age, threats of violence or the consequences of a natural disaster — were more likely to empathize with others in distress, and, as a result, feel more compassion for them. And of utmost importance, the more compassion they felt, the more money they donated (in the first study) or the more time they devoted to helping the other complete his work (in the second).

Now, if experiencing any type of hardship can make a person more compassionate, you might assume that the pinnacle of compassion would be reached when someone has experienced the exact trial or misfortune that another person is facing. Interestingly, this turns out to be dead wrong.

(cut)

As a result of this glitch, reflecting on your own past experience with a specific misfortune will very likely cause you to underappreciate just how trying that exact challenge can be for someone else (or was, in fact, for you at the time). You overcame it, you think; so should he. The result? You lack compassion.

The entire article is here.

Thursday, July 30, 2015

If obesity is a moral failing, then our morals have failed.

By Anke Snoek
Aeon Magazine - Ideas
Originally published July 6, 2015

Here is an excerpt:

But there’s another reason to be cautious about calling obesity a moral failing. The lay vision is that obese people act on their desires rather than on their better judgment, but recent research of Nora Volkow shows some striking parallels between addiction and obesity. Evolutionarily, we are wired to find certain foods and activities – the ones that contribute more to our survival – more attractive than others. That’s why when we engage in positive social relationships, sex, or eat food with high fat, sugar or salt content, dopamine is released in the brain. Dopamine is often associated with pleasure. We get a pleasurable feeling when we eat good food, but dopamine also contributes to conditioned learning and so-called incentive sensitization. That is, we become sensitive to cues linked to rewarding behaviour or food which was important but scarce in the distant past.  In prehistoric times we learned which cues predict, for instance, where the best fruit trees grow.

The entire article is here.

Thursday, July 23, 2015

Common medications sway moral judgment

By Kelly Servick
Science Magazine
Originally published July 2, 2015

Here is an excerpt:

The researchers could then calculate the “exchange rate between money and pain”—how much extra cash a person must be paid to accept one additional shock. In previous research, Crockett’s team learned that the exchange rate varies depending on who gets hurt. On average, people are more reluctant to profit from someone else’s pain than their own—a phenomenon the researchers call “hyperaltruism.”

In the new study, the scientists tested whether drugs can shift that pain-to-money exchange rate. A few hours before the test, they gave the subjects either a placebo pill or one of two drugs: the serotonin-enhancing antidepressant drug citalopram or the Parkinson’s treatment levodopa, which increases dopamine levels.

On average, people receiving the placebo were willing to forfeit about 55 cents per shock to avoid harming themselves, and 69 cents to avoid harming others. Those amounts nearly doubled in people who took citalopram: They were generally more averse to causing harm, but still preferred profiting from their own pain over another’s, Crockett’s team reports online today in Current Biology. Levodopa had a different effect: It seemed to make people just as willing to shock others as themselves for profit.
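
The "exchange rate between money and pain" is, in effect, the compensation per extra shock at which a participant becomes indifferent between accepting and refusing. One simple way to estimate it from binary accept/reject choices is to find where the acceptance rate crosses 50%. The task structure and numbers below are my assumptions, not the study's exact procedure.

```python
import numpy as np

# Hypothetical choice data: on each trial the participant is offered some extra
# money per additional shock and either accepts or rejects delivering the shocks
# (to self or to another person). Acceptance rates rise with the offered price.
price_per_shock = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 1.00])  # dollars
accept_self     = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])
accept_other    = np.array([0.02, 0.08, 0.20, 0.40, 0.60, 0.80, 0.95])

def exchange_rate(prices, accept_rate):
    """Price at which acceptance crosses 50%: the money-pain exchange rate."""
    return float(np.interp(0.5, accept_rate, prices))

print(f"exchange rate, own pain:     ${exchange_rate(price_per_shock, accept_self):.2f} per shock")
print(f"exchange rate, other's pain: ${exchange_rate(price_per_shock, accept_other):.2f} per shock")
# A higher crossing point for harming others than for harming oneself is the
# "hyperaltruism" pattern described in the article.
```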

The entire article is here.

Wednesday, July 22, 2015

Bias Blind Spot: Structure, Measurement, and Consequences

Irene Scopelliti, Carey K. Morewedge, Erin McCormick, H. Lauren Min, Sophie Lebrecht, Karim S. Kassam (2015)
Bias Blind Spot: Structure, Measurement, and Consequences. Management Science
Published online in Articles in Advance 24 Apr 2015
http://dx.doi.org/10.1287/mnsc.2014.2096

Abstract

People exhibit a bias blind spot: they are less likely to detect bias in themselves than in others. We report the development and validation of an instrument to measure individual differences in the propensity to exhibit the bias blind spot that is unidimensional, internally consistent, has high test-retest reliability, and is discriminated from measures of intelligence, decision-making ability, and personality traits related to self-esteem, self-enhancement, and self-presentation. The scale is predictive of the extent to which people judge their abilities to be better than average for easy tasks and worse than average for difficult tasks, ignore the advice of others, and are responsive to an intervention designed to mitigate a different judgmental bias. These results suggest that the bias blind spot is a distinct metabias resulting from naïve realism rather than other forms of egocentric cognition, and has unique effects on judgment and behavior.
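
The psychometric claims in the abstract, internal consistency and high test-retest reliability, correspond to two standard statistics. Here is a sketch of how they are typically computed from item-level responses; the data are simulated, and this is not the authors' scale or code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated responses: n participants rate k bias-blind-spot items at two time
# points, each response driven by a stable person-level trait plus noise.
n, k = 300, 14
trait = rng.normal(size=n)
time1 = trait[:, None] + rng.normal(scale=0.7, size=(n, k))
time2 = trait[:, None] + rng.normal(scale=0.7, size=(n, k))

def cronbach_alpha(items):
    """Internal consistency: k/(k-1) * (1 - sum of item variances / variance of total)."""
    n_items = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_var / total_var)

score1, score2 = time1.mean(axis=1), time2.mean(axis=1)
test_retest = np.corrcoef(score1, score2)[0, 1]

print(f"Cronbach's alpha (time 1): {cronbach_alpha(time1):.2f}")
print(f"test-retest correlation:   {test_retest:.2f}")
```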

The entire article is here.

Monday, July 13, 2015

How national security gave birth to bioethics

By Jonathan D. Moreno
The Conversation
Originally posted June 8, 2015

Here is an excerpt:

Ironically, while the experiments in Guatemala were going on in the late 1940s, three American judges were hearing the arguments in a war crimes trial in Germany. Twenty-three Nazi doctors and bureaucrats were accused of horrific experiments on people in concentration camps.

The judges decided they needed to make the rules around human experiments clear, so as part of their decision they wrote what has come to be known as the Nuremberg Code. The code states that “the voluntary consent of the human subject is absolutely essential.”

The Guatemala experiments clearly violated that code. President Obama’s commission found that the US public health officials knew what they were doing was unethical, so they kept it quiet. Years later, one of those doctors had a key role in the infamous syphilis experiments in Tuskegee, Alabama that studied the progression of untreated syphilis. None of the 600 men enrolled in the experiments was told if he had syphilis or not. No one with the disease was offered penicillin, the treatment of choice for syphilis. The 40-year experiment finally ended in 1972.

The entire article is here.

Friday, June 12, 2015

Confirmation Bias and the Limits of Human Knowledge

By Peter Wehner
Commentary Magazine
Originally published May 27, 2015

Here is an excerpt:

Confirmation bias is something we can easily identify in others but find very difficult to detect in ourselves. (If you finish this piece thinking only of the blindness of those who disagree with you, you are proving my point.) And while some people are far more prone to it than others, it’s something none of us is fully free of. We all hold certain philosophical assumptions, whether we’re fully aware of them or not, and they create a prism through which we interpret events. Often those assumptions are not arrived at through empiricism; they are grounded in moral intuitions. And moral intuitions, while not sub-rational, are shaped by things other than facts and figures. “The heart has its reasons which reason itself does not know,” Pascal wrote. And often the heart is right.

Without such core intuitions, we could not hope to make sense of the world. But these intuitions do not stay broad and implicit: we use them to make concrete judgments in life. The consequences of those judgments offer real-world tests of our assumptions, and if we refuse to learn from the results then we have no hope of improving our judgments in the future.

The entire article is here.