Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, July 31, 2016

Neural mechanisms underlying the impact of daylong cognitive work on economic decisions

Bastien Blain, Guillaume Hollard, and Mathias Pessiglione
PNAS 2016 113 (25) 6967-6972

Abstract

The ability to exert self-control is key to social insertion and professional success. An influential literature in psychology has developed the theory that self-control relies on a limited common resource, so that fatigue effects might carry over from one task to the next. However, the biological nature of the putative limited resource and the existence of carry-over effects have been matters of considerable controversy. Here, we targeted the activity of the lateral prefrontal cortex (LPFC) as a common substrate for cognitive control, and we prolonged the time scale of fatigue induction by an order of magnitude. Participants performed executive control tasks known to recruit the LPFC (working memory and task-switching) over more than 6 h (an approximate workday). Fatigue effects were probed regularly by measuring impulsivity in intertemporal choices, i.e., the propensity to favor immediate rewards, which has been found to increase under LPFC inhibition. Behavioral data showed that choice impulsivity increased in a group of participants who performed hard versions of executive tasks but not in control groups who performed easy versions or enjoyed some leisure time. Functional MRI data acquired at the start, middle, and end of the day confirmed that enhancement of choice impulsivity was related to a specific decrease in the activity of an LPFC region (in the left middle frontal gyrus) that was recruited by both executive and choice tasks. Our findings demonstrate a concept of focused neural fatigue that might be naturally induced in real-life situations and have important repercussions on economic decisions.

Significance

In evolved species, resisting the temptation of immediate rewards is a critical ability for the achievement of long-term goals. This self-control ability was found to rely on the lateral prefrontal cortex (LPFC), which also is involved in executive control processes such as working memory or task switching. Here we show that self-control capacity can be altered in healthy humans at the time scale of a workday, by performing difficult executive control tasks. This fatigue effect, manifested in choice impulsivity, was linked to reduced excitability of the LPFC following its intensive utilization over the day. Our findings might have implications for designing management strategies that would prevent daylong cognitive work from biasing economic decisions.

The research is here.


Saturday, July 30, 2016

Sexual abuse by doctors sometimes goes unpunished

Associated Press
Originally published July 6, 2016

Sexual abuse by doctors against patients is surprisingly widespread, yet the fragmented medical oversight system shrouds offenders' actions in secrecy and allows many to continue to treat patients, an investigation by The Atlanta Journal-Constitution has found.

The AJC obtained and analyzed more than 100,000 disciplinary orders against doctors since 1999. Among those, the newspaper identified more than 3,100 doctors sanctioned after being accused of sexual misconduct. More than 2,400 of the doctors had violations involving patients. Of those, half still have active medical licenses today, the newspaper found.

These cases represent only a fraction of incidents in which doctors have been accused of sexually abusing patients. Many remain obscured, the newspaper said, because state regulators and hospitals sometimes handle sexual misconduct cases in secret. Also, some public records are so vaguely worded that patients would not be aware that a sexual offense occurred.

The article is here.

Friday, July 29, 2016

When Doctors Have Conflicts of Interest

By Mikkael A. Sekeres
The New York Times - Well Blog
Originally posted June 29, 2016

Here is an excerpt:

What if, instead, the drug for which she provided advice is already commercially available? How much is her likelihood of prescribing this medication – what we call a conflict of commitment – influenced by her having been given an honorarium by the manufacturer for her advice about this or another drug made by the same company?

We know already that doctors are influenced in their prescribing patterns even by tchotchkes like pens or free lunches. One recent study of almost 280,000 physicians who received over 63,000 payments, most of which were in the form of free meals worth under $20, showed that these doctors were more likely to prescribe the blood pressure, cholesterol or antidepressant medication promoted as part of that meal than other medications in the same class of drugs. Are these incentives really enough to encroach on our sworn obligation to do what’s best for our patients, irrespective of outside influences? Perhaps, and that’s the reason many hospitals ban them.

In both scenarios the doctor should, at the very least, have to disclose the conflict to patients, either on a website, where patients could easily view it, or by informing them directly, as my mother-in-law’s doctor did to her.

The article is here.

Doctors disagree about the ethics of treating friends and family

By Elisabeth Tracey
The Pulse
Originally published July 1, 2016

Here is an excerpt:

Gold says the guidelines are in place for good reason. One concern is that a physician may have inappropriate emotional investment in the care of a friend or family member.

"It may cloud your ability to make a good judgment, so you might treat them differently than you would treat a patient in your office," Gold says. "For example you might order extra tests for the family member that you wouldn't order for someone else."

Physicians may also avoid broaching uncomfortable topics with someone they know personally.

"Sometimes we're talking about sensitive issues," says Gold. "If someone has a sexually transmitted disease, it's very awkward with a family member to go into a lot of detail with them... even though with a patient you would have those discussions."

The article is here.

Thursday, July 28, 2016

Driverless Cars: Can There Be a Moral Algorithm?

By Daniel Callahan
The Hastings Center
Originally posted July 5, 2016

Here is an excerpt:

The surveys also showed a serious tension between reducing pedestrian deaths and maximizing the driver’s personal protection. Drivers will want the latter, but regulators might come out on the utilitarian side, reducing harm to others. The researchers conclude by saying that a “moral algorithm” to take account of all these variations is needed, and that they “will need to tackle more intricate decisions than those considered in our survey.” As if there were not enough already.

Just who is to do the tackling? And how can an algorithm of that kind be created?  Joshua Greene has a decisive answer to those questions: “moral philosophers.” Speaking as a member of that tribe, I feel flattered. He does, however, get off on the wrong diplomatic foot by saying that “software engineers–unlike politicians, philosophers, and opinionated uncles—don’t have the luxury of vague abstractions.” He goes on to set a high bar to jump. The need is for “moral theories or training criteria sufficiently precise to determine exactly which rights people have, what virtue requires, and what tradeoffs are just.” Exactly!

I confess up front that I don’t think we can do it.  Maybe people in Greene’s professional tribe turn out exact algorithms with every dilemma they encounter.  If so, we envy them for having all the traits of software engineers.  No such luck for us. We will muddle through on these issues as we have always done—muddle through because exactness is rare (and its claimants suspect), because the variables will all change over time, and because there is a varied set of actors (drivers, manufacturers, purchasers, and insurers), each with different interests and values.

The article is here.

We live in a culture of mental health haves and have nots

Naomi Freundlich
KevinMD.com
Originally published July 4, 2016

Here is an excerpt:

Let’s start with enforcement. Multiple agencies oversee compliance with the parity laws, including state insurance boards, Medicaid, HHS or the Department of Labor, depending on how and where an individual is insured. Figuring out who to contact when there’s been a violation of parity laws can be difficult, especially when people are experiencing mental health problems.

Furthermore, although obvious discrepancies between behavioral and medical coverage are not all that common, according to Kaiser Health News, many insurers have figured out how to limit mental health costs through more subtle strategies that are harder to track. These include frequent and rigorous utilization review and so-called “fail first” therapies that require providers to try the least expensive therapies first even if they might not be the most effective. The KHN authors note, “Among the more murky areas is ‘medical necessity’ review — in which insurers decide whether a patient requires a certain treatment and at what frequency.”

A survey conducted by the National Alliance on Mental Illness found that patients were twice as likely to be denied mental health care (29 percent) based on “medical necessity” review than other medical care (14 percent).

The article is here.

Wednesday, July 27, 2016

Research fraud: the temptation to lie – and the challenges of regulation

Ian Freckelton
The Conversation
Originally published July 5, 2016

Most scientists and medical researchers behave ethically. However, in recent years, the number of high-profile scandals in which researchers have been exposed as having falsified their data raises the issue of how we should deal with research fraud.

There is little scholarship on this subject that crosses disciplines and engages with the broader phenomenon of unethical behaviour within the domain of research.

This is partly because disciplines tend to operate in their silos and because universities, in which researchers are often employed, tend to minimise adverse publicity.

When scandals erupt, embarrassment in a particular field is experienced for a short while – and researchers may leave their university. But few articles are published in scholarly journals about how the research fraud was perpetrated; how it went unnoticed for a significant period of time; and how prevalent the issue is.

The article is here.

Doctors have become less empathetic, but is it their fault?

By David Scales
Aeon Magazine
Originally posted July 4, 2016

Here is an excerpt:

The key resides in the nature of clinical empathy, which requires that the practitioner be truly present. That medical professional must be curious enough to cognitively and emotionally relate to a patient’s situation, perspective and feelings, and then communicate this understanding back to the patient.

At times, empathy’s impact seems more magical than biological. When empathy scores are higher, patients recover faster from the common cold, diabetics have better blood-sugar control, people adhere more closely to treatment regimens, and patients feel more enabled to tackle their illnesses. Empathetic physicians report higher personal wellbeing and are sued less often.

If the case for empathy is clear, the way to boost it remains murky indeed. New research shows that meditation and ‘mindful communication’ can increase a physician’s empathy, spawning a niche industry of training courses. Yet this preoccupation has missed the glaring deficits in the work environment, which squelch the human empathy that doctors possess.

The article is here.

Tuesday, July 26, 2016

The Paradox of Disclosure

By Sunita Sah
The New York Times
Originally published July 8, 2016

Here is an excerpt:

To some extent, they do work. Disclosing a conflict of interest — for example, a financial adviser’s commission or a physician’s referral fee for enrolling patients into clinical trials — often reduces trust in the advice.

But my research has found that people are still more likely to follow this advice because the disclosure creates increased pressure to follow the adviser’s recommendation. It turns out that people don’t want to signal distrust to their adviser or insinuate that the adviser is biased, and they also feel pressure to help satisfy their adviser’s self-interest. Instead of functioning as a warning, disclosure can become a burden on advisees, increasing pressure to take advice they now trust less.

The article is here.


How Large Is the Role of Emotion in Judgments of Moral Dilemmas?

Horne Z, Powell D (2016)
PLoS ONE 11(7): e0154780.
doi: 10.1371/journal.pone.0154780

Abstract

Moral dilemmas often pose dramatic and gut-wrenching emotional choices. It is now widely accepted that emotions are not simply experienced alongside people’s judgments about moral dilemmas, but that our affective processes play a central role in determining those judgments. However, much of the evidence purporting to demonstrate the connection between people’s emotional responses and their judgments about moral dilemmas has recently been called into question. In the present studies, we reexamined the role of emotion in people’s judgments about moral dilemmas using a validated self-report measure of emotion. We measured participants’ specific emotional responses to moral dilemmas and, although we found that moral dilemmas evoked strong emotional responses, we found that these responses were only weakly correlated with participants’ moral judgments. We argue that the purportedly strong connection between emotion and judgments of moral dilemmas may have been overestimated.

The article is here.

Monday, July 25, 2016

Enhancement as Nothing More than Advantageous Bodily and Mental States

by Hazem Zohny
BMJ Blogs
Originally posted May 20, 2016

Some bodily and mental states are advantageous: a strong immune system, a sharp mind, strength.  These are advantageous precisely because, in most contexts, they are likely to increase your chances of leading a good life.  In contrast, disadvantageous states – e.g. the loss of a limb, a sense, or the ability to recall things – are likely to diminish those chances.

One way to think about enhancement and disability is in such welfarist terms.  A disability is no more than a disadvantageous bodily or mental state, while to undergo an enhancement is to change that state into a more advantageous one – that is, one that is more conducive to your well-being.  This would hugely expand the scope of what is considered disabling or enhancing.  For instance, there may be all kinds of real and hypothetical things you could change about your body and mind that would (at least potentially) be advantageous: you could mend a broken arm or stop a tumour from spreading, but you could also vastly sharpen your senses, take a drug that makes you more likeable, stop your body from expiring before the age of 100, or even change the scent of your intestinal gases to a rosy fragrance.

The article is here.

Consciousness: The Mind Messing With the Mind

By George Johnson
The New York Times
Originally published July 4, 2016

Here is an excerpt:

Michael Graziano, a neuroscientist at Princeton University, suggested to the audience that consciousness is a kind of con game the brain plays with itself. The brain is a computer that evolved to simulate the outside world. Among its internal models is a simulation of itself — a crude approximation of its own neurological processes.

The result is an illusion. Instead of neurons and synapses, we sense a ghostly presence — a self — inside the head. But it’s all just data processing.

“The machine mistakenly thinks it has magic inside it,” Dr. Graziano said. And it calls the magic consciousness.

It’s not the existence of this inner voice he finds mysterious. “The phenomenon to explain,” he said, “is why the brain, as a machine, insists it has this property that is nonphysical.”

The article is here.

Sunday, July 24, 2016

Nation’s psychiatric bed count falls to record low

By Lateshia Beachum
The Washington Post
Originally published July 1, 2016

The number of psychiatric beds in state hospitals has dropped to a historic low, and nearly half of the beds that are available are filled with patients from the criminal justice system.

Both statistics, reported in a new national study, reflect the sweeping changes that have taken place in the half-century since the United States began deinstitutionalizing mental illness in favor of outpatient treatment. But the promise of that shift was never fulfilled, and experts and advocates say the result is seen even today in the increasing ranks of homeless and incarcerated Americans suffering from serious mental conditions.

The article is here.

Saturday, July 23, 2016

Four Ways Your Leadership May Be Encouraging Unethical Behavior

Ron Carucci
Forbes.com
Originally published June 14, 2016

Most leaders would claim they want the utmost ethical standards upheld by those they lead. But they might be shocked to discover that, even with the best of intentions, their own leadership may be corrupting the choices of those they lead.

(cut)

1. You are making it psychologically unsafe to speak up. Despite saying things like, “I have an open door policy,” where employees can express even controversial issues, some leadership actions may dissuade the courage needed to raise ethical concerns. Creating a culture in which people freely speak up is vital to ensuring people don’t collude with, or incite, misconduct.

(cut)

2. You are applying excessive pressure to reach unrealistic performance targets. Significant research suggests that unfettered goal setting can encourage people to make compromising choices in order to reach targets, especially if those targets seem unrealistic. Leaders may be inviting people to cheat in two ways. They will cut corners on the way they reach a goal, or they will lie when reporting how much of the goal they actually achieved.

The article is here.

Friday, July 22, 2016

Medical involvement in torture today?

Kenneth Boyd
J Med Ethics 2016;42:411-412 doi:10.1136/medethics-2016-103737

In the ethics classroom, medical involvement in torture is often discussed in terms of what happens or has happened elsewhere, in some imagined country far away, under a military dictatorship for example, or historically in Nazi Germany or Stalin's Russia. In these contexts, at a distance in space or time, the healthcare professional's moral dilemma can be clearly demonstrated. On the one hand, any involvement whatever in the practice of torture, countenancing or condoning as well as participating, is forbidden, formally by the World Medical Association's 1975 Declaration of Tokyo, but more generally by the professional duty to do no harm. On the other hand, the professional duty of care, and more generally human decency and compassion, forbids standing idly by when no other professional with comparable skills is available to relieve the suffering of victims of torture. In such circumstances, the health professional's impulse to exercise their duty of care, albeit thereby implicitly countenancing or condoning torture, may be strengthened by the knowledge that to refuse may put their own life or that of a member of their family in danger. But then again, they may also be all too aware that in exercising their duty of care they may simply be ‘patching up’ the victims in order for them to be tortured again.

The article is here.

What This White-Collar Felon Can Teach You About Your Temptation To Cross That Ethical Line

Ron Carucci
Forbes.com
Originally posted June 28, 2016

The sobering truth of Law Professor Donald Langevoort’s words silenced the room like a loud mic-drop: “We’re not as ethical as we think we are.” Participants at Ethical Systems’ recent Ethics By Design conference were visibly uncomfortable…because they all knew it was true.

Research strongly indicates people over-estimate how strong their ethics are. I wanted to learn more about why genuinely honest people can be lured to cross lines they surely would have predicted, “I would never do that!”

The article is here.

Thursday, July 21, 2016

Frankenstein’s paperclips

The Economist
Originally posted June 25, 2016

Here is an excerpt:

AI researchers point to several technical reasons why fear of AI is overblown, at least in its current form. First, intelligence is not the same as sentience or consciousness, says Mr Ng, though all three concepts are commonly elided. The idea that machines will “one day wake up and change their minds about what they will do” is just not realistic, says Francesca Rossi, who works on the ethics of AI at IBM. Second, an “intelligence explosion” is considered unlikely, because it would require an AI to make each version of itself in less time than the previous version as its intelligence grows. Yet most computing problems, even much simpler ones than designing an AI, take much longer as you scale them up.

Third, although machines can learn from their past experiences or environments, they are not learning all the time.

The article is here.

March of the machines

The Economist
Originally published June 25, 2016

Here is an excerpt:

After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.

The article is here.

Wednesday, July 20, 2016

Fear and Loathing in Bioethics

Carl Elliott
Narrative Inquiry in Bioethics
Volume 6.1 (2016) 43–46

Abstract

As bioethicists have become medical insiders, they have had to struggle with a conflict between what their superiors expect of them and the demands of their conscience. Often they simply resign themselves to the conflict and work quietly within the system. But the machinery of the medical–industrial complex grinds up conscientious people because those people can see no remedies for injustice apart from the bureaucratic procedures prescribed by the machine itself. The answer to injustice is not a memorandum of understanding or a new strategic plan, but rather public resistance and solidarity.

The article is here.

An NYU Study Gone Wrong, and a Top Researcher Dismissed

By Benedict Carey
The New York Times
Originally posted June 27, 2016

New York University’s medical school has quietly shut down eight studies at its prominent psychiatric research center and parted ways with a top researcher after discovering a series of violations in a study of an experimental, mind-altering drug.

A subsequent federal investigation found lax oversight of study participants, most of whom had serious mental issues. The Food and Drug Administration investigators also found that records had been falsified and researchers had failed to keep accurate case histories.

In one of the shuttered studies, people with a diagnosis of post-traumatic stress caused by childhood abuse took a relatively untested drug intended to mimic the effects of marijuana, to see if it relieved symptoms.

The article is here.

Tuesday, July 19, 2016

Who Blames the Victim?

Laura Niemi and Liane Young
Gray Matter - The New York Times
Originally published June 24, 2016

Here is an excerpt:

Victim blaming appears to be deep-seated, rooted in core moral values, but also somewhat malleable, susceptible to subtle changes in language. For those looking to increase sympathy for victims, a practical first step may be to change how we talk: Focusing less on victims and more on perpetrators — “Why did he think he had license to rape?” rather than “Imagine what she must be going through” — may be a more effective way of serving justice.

The article is here.

When and Why We See Victims as Responsible: The Impact of Ideology on Attitudes Toward Victims

Laura Niemi and Liane Young
Pers Soc Psychol Bull June 23, 2016

Abstract

Why do victims sometimes receive sympathy for their suffering and at other times scorn and blame? Here we show a powerful role for moral values in attitudes toward victims. We measured moral values associated with unconditionally prohibiting harm (“individualizing values”) versus moral values associated with prohibiting behavior that destabilizes groups and relationships (“binding values”: loyalty, obedience to authority, and purity). Increased endorsement of binding values predicted increased ratings of victims as contaminated (Studies 1-4); increased blame and responsibility attributed to victims, increased perceptions of victims’ (versus perpetrators’) behaviors as contributing to the outcome, and decreased focus on perpetrators (Studies 2-3). Patterns persisted controlling for politics, just world beliefs, and right-wing authoritarianism. Experimentally manipulating linguistic focus off of victims and onto perpetrators reduced victim blame. Both binding values and focus modulated victim blame through victim responsibility attributions. Findings indicate the important role of ideology in attitudes toward victims via effects on responsibility attribution.

The article is here.

Monday, July 18, 2016

How Language ‘Framing’ Influences Decision-Making

Observations
Association for Psychological Science
Published in 2016

The way information is presented, or “framed,” when people are confronted with a situation can influence decision-making. To study framing, people often use the “Asian Disease Problem.” In this problem, people are faced with an imaginary outbreak of an exotic disease and asked to choose how they will address the issue. When the problem is framed in terms of lives saved (or “gains”), people are given the choice of selecting:
Medicine A, where 200 out of 600 people will be saved
or
Medicine B, where there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.
When the problem is framed in terms of lives lost (or “losses”), people are given the option of selecting:
Medicine A, where 400 out of 600 people will die
or
Medicine B, where there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Although in both problems Medicine A and Medicine B lead to the same outcomes, people are more likely to choose Medicine A when the problem is presented in terms of gains and to choose Medicine B when the problem is presented in terms of losses. This difference occurs because people tend to be risk averse when the problem is presented in terms of gains, but risk tolerant when it is presented in terms of losses.
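
Editor's note: The framing effect is striking precisely because the two frames describe identical gambles. As a quick check on the arithmetic above, here is a minimal Python sketch (the lowercase option labels and the function name are mine, not the APS article's) that computes the expected number of survivors for each medicine under each frame.

TOTAL = 600

def expected_survivors(outcomes):
    # outcomes: list of (probability, number_saved) pairs
    return sum(p * saved for p, saved in outcomes)

# Gain frame ("lives saved")
gain_a = expected_survivors([(1.0, 200)])                        # 200 saved for sure
gain_b = expected_survivors([(1/3, 600), (2/3, 0)])              # gamble on saving everyone

# Loss frame ("lives lost"), converted to survivors for comparison
loss_a = expected_survivors([(1.0, TOTAL - 400)])                # 400 die for sure
loss_b = expected_survivors([(1/3, TOTAL), (2/3, TOTAL - 600)])  # gamble on nobody dying

print(f"Gain frame: A = {gain_a:.0f}, B = {gain_b:.0f} expected survivors")
print(f"Loss frame: A = {loss_a:.0f}, B = {loss_b:.0f} expected survivors")

All four expected values come out to 200 survivors, so the reliable shift from Medicine A under the gain frame to Medicine B under the loss frame reflects how the options are worded, not what they deliver.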

The article is here.

Cooperation, Fast and Slow: Meta-Analytic Evidence for a Theory of Social Heuristics and Self-Interested Deliberation

David G. Rand
(In press).
Psychological Science.

Abstract

Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games (total N = 17,647; no indication of publication bias using Egger’s test, Begg’s test, or p-curve). My meta-analysis was guided by the Social Heuristics Hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is never in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted relative to deliberation, but no significant difference in strategic cooperation between intuitive and deliberative conditions.

The article is here.

Sunday, July 17, 2016

AI, Transhumanism, Merging with Superintelligence + Singularity Explained

Hosted by Michael Parker
The Antidote: TheLipTV2
Originally published on Mar 2, 2015

Artificial Intelligence, the possibility of merging consciousness with computers, and singularity are discussed in this mind-expanding conversation with Dr. Susan Schneider. Are we prepared to face the implications of the success of our own technological innovations? Is the universe teeming with postbiological super Artificial Intelligence? Can silicon-based entities bond with carbon-based lifeforms? Explore the philosophical questions of superintelligence on the Antidote, hosted by Michael Parker.


Saturday, July 16, 2016

Federal panel approves first test of CRISPR editing in humans

By Laurie McGinley
The Washington Post
Originally posted on June 21, 2016

A National Institutes of Health advisory panel on Tuesday approved the first human use of the gene-editing technology CRISPR, for a study designed to target three types of cancer and funded by tech billionaire Sean Parker’s new cancer institute.

The experiment, proposed by researchers at the University of Pennsylvania, would use CRISPR-Cas9 technology to modify patients’ own T cells to make them more effective in attacking melanoma, multiple myeloma and sarcoma.

The federal Recombinant DNA Advisory Committee approved the Penn proposal unanimously, with one member abstaining. The experiment still must be approved by the Food and Drug Administration, which regulates clinical trials.

The article is here.

Friday, July 15, 2016

CIA Psychologists Admit Role In ‘Enhanced Interrogation’ Program In Court Filing

Jessica Schulberg
The Huffington Post
Originally posted June 22, 2016

Two psychologists who helped the CIA develop and execute its now-defunct “enhanced interrogation” program partially admitted for the first time to roles in what is broadly acknowledged to have been torture.

In a 30-page court filing posted Tuesday evening, psychologists James Mitchell and Bruce Jessen responded to nearly 200 allegations and legal justifications put forth by the American Civil Liberties Union in a complaint filed in October. The psychologists broadly denied allegations that “they committed torture, cruel, inhuman and degrading treatment, non-consensual human experimentation and/or war crimes” — but admitted to a series of actions that can only be described as such.

“Defendants admit that over a period of time, they administered to [Abu] Zubaydah walling, facial and abdominal slaps, facial holds, sleep deprivation, and waterboarding, and placed Zubaydah in cramped confinement,” the filing says.

The article is here.

Thursday, July 14, 2016

Psychologists admit harsh treatment of CIA prisoners but deny torture

By Nicholas K. Geranios
The Associated Press
Originally published June 22, 2016

Two former Air Force psychologists who helped design the CIA’s enhanced interrogation techniques for terrorism suspects acknowledge using waterboarding and other harsh tactics but deny allegations of torture and war crimes leveled by a civil-liberties group, according to new court records.

The American Civil Liberties Union (ACLU) sued consultants James E. Mitchell and John “Bruce” Jessen of Washington state last October on behalf of three former CIA prisoners, including one who died, creating a closely watched case that will likely include classified information.

In response, the pair’s attorneys filed documents this week in which Mitchell and Jessen acknowledge using waterboarding, loud music, confinement, slapping and other harsh methods but dispute that they amounted to torture.

“Defendants deny that they committed torture, cruel, inhuman and degrading treatment, nonconsensual human experimentation and/or war crimes,” their lawyers wrote, asking a federal judge in Spokane to throw out the lawsuit and award them court costs.

The article is here.

At the Heart of Morality Lies Neuro-Visceral Integration: Lower Cardiac Vagal Tone Predicts Utilitarian Moral Judgment

Gewnhi Park, Andreas Kappes, Yeojin Rho, and Jay J. Van Bavel
Soc Cogn Affect Neurosci first published online June 17, 2016
doi:10.1093/scan/nsw077

Abstract

To not harm others is widely considered the most basic element of human morality. The aversion to harm others can be either rooted in the outcomes of an action (utilitarianism) or reactions to the action itself (deontology). We speculated that human moral judgments rely on the integration of neural computations of harm and visceral reactions. The present research examined whether utilitarian or deontological aspects of moral judgment are associated with cardiac vagal tone, a physiological proxy for neuro-visceral integration. We investigated the relationship between cardiac vagal tone and moral judgment by using a mix of moral dilemmas, mathematical modeling, and psychophysiological measures. An index of bipolar deontology-utilitarianism was correlated with resting heart rate variability—an index of cardiac vagal tone—such that more utilitarian judgments were associated with lower heart rate variability. Follow-up analyses using process dissociation, which independently quantifies utilitarian and deontological moral inclinations, provided further evidence that utilitarian (but not deontological) judgments were associated with lower heart rate variability. Our results suggest that the functional integration of neural and visceral systems during moral judgments can restrict outcome-based, utilitarian moral preferences. Implications for theories of moral judgment are discussed.

A copy of the paper is here.

Wednesday, July 13, 2016

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures

By Mitch Smith
The New York Times
Originally published June 23, 2016

Here is an excerpt:

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.

Company officials say the algorithm’s results are backed by research, but they are tight-lipped about its details. They do acknowledge that men and women receive different assessments, as do juveniles, but the factors considered and the weight given to each are kept secret.

“The key to our product is the algorithms, and they’re proprietary,” said Jeffrey Harmon, Northpointe’s general manager. “We’ve created them, and we don’t release them because it’s certainly a core piece of our business. It’s not about looking at the algorithms. It’s about looking at the outcomes.”

The article is here.

Does moral identity effectively predict moral behavior?: A meta-analysis

Steven G. Hertz and Tobias Krettenauer
Review of General Psychology, Vol 20(2), Jun 2016, 129-140.
http://dx.doi.org/10.1037/gpr0000062

Abstract

This meta-analysis examined the relationship between moral identity and moral behavior. It was based on 111 studies from a broad range of academic fields including business, developmental psychology and education, marketing, sociology, and sport sciences. Moral identity was found to be significantly associated with moral behavior (random effects model, r = .22, p < .01, 95% CI [.19, .25]). Effect sizes did not differ for behavioral outcomes (prosocial behavior, avoidance of antisocial behavior, ethical behavior). Studies that were entirely based on self-reports yielded larger effect sizes. In contrast, the smallest effect was found for studies that were based on implicit measures or used priming techniques to elicit moral identity. Moreover, a marginally significant effect of culture indicated that studies conducted in collectivistic cultures yielded lower effect sizes than studies from individualistic cultures. Overall, the meta-analysis provides support for the notion that moral identity strengthens individuals’ readiness to engage in prosocial and ethical behavior as well as to abstain from antisocial behavior. However, moral identity fares no better as a predictor of moral action than other psychological constructs.

And the conclusion...

Overall, three major conclusions can be drawn from this meta-analysis. First, considering all empirical evidence available, it seems impossible to deny that moral identity positively predicts moral behavior in individuals from Western cultures. Although this finding does not refute research on moral hypocrisy, it puts the claim that people want to appear moral, rather than be moral, into perspective (Batson, 2011; Frimer et al., 2014). If this were always true, why would people who feel that morality matters to them engage more readily in moral action? Second, explicit self-report measures represent a valid and valuable approach to the moral identity construct. This is an important conclusion because many scholars feel that more effort should be invested into developing moral identity measures (e.g., Hardy & Carlo, 2011b; Jennings et al., 2015). Third, although moral identity positively predicts moral behavior, the effect is not much stronger than the effects of other constructs, notably moral judgment or moral emotions. Thus, there is no reason to prioritize the moral identity construct as a predictor of moral action at the expense of other factors. Instead, it seems more appropriate to consider moral identity in a broader conceptual framework where it interacts with other personological and situational factors to bring about moral action. This approach is well underway in studies that investigate the moderating and mediating role of moral identity as a predictor of moral action (e.g., Aquino et al., 2007; Hardy et al., 2015). As part of this endeavor, it might become necessary to give up an overly homogenous notion of the moral identity construct in order to acknowledge that moral identities may consist of different motivations and goal orientations. Recently, Krettenauer and Casey (2015) provided evidence for two different types of moral identities, one that is primarily concerned with demonstrating morality to others, and one that is more inwardly defined by being consistent with one's values and beliefs. This differentiation has important ramifications for moral emotions and moral action and helps to explain why moral identities sometimes strengthen individuals' motivation to act morally and sometimes undermine it.
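
Editor's note: For readers curious about what the "random effects model, r = .22, 95% CI [.19, .25]" estimate in the abstract above involves, here is a minimal sketch of a random-effects pooling of correlations using Fisher's z transformation and a DerSimonian-Laird estimate of between-study variance. The study correlations and sample sizes below are invented for illustration; only the general method, not the numbers or the authors' software, reflects Hertz and Krettenauer's analysis.

import math

# Toy meta-analysis of correlations: Fisher z + DerSimonian-Laird tau^2.
studies = [(0.18, 120), (0.30, 85), (0.10, 240), (0.25, 60), (0.22, 150)]  # (r, n), made up

z = [math.atanh(r) for r, n in studies]   # Fisher z for each study
v = [1.0 / (n - 3) for r, n in studies]   # sampling variance of each z

# Fixed-effect step, needed only to estimate between-study variance tau^2
w = [1.0 / vi for vi in v]
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
Q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
df = len(studies) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooling and 95% confidence interval, back-transformed to r
w_star = [1.0 / (vi + tau2) for vi in v]
z_re = sum(wi * zi for wi, zi in zip(w_star, z)) / sum(w_star)
se = math.sqrt(1.0 / sum(w_star))
lo, hi = z_re - 1.96 * se, z_re + 1.96 * se
print(f"pooled r = {math.tanh(z_re):.2f}, 95% CI [{math.tanh(lo):.2f}, {math.tanh(hi):.2f}]")

With real data, each (r, n) pair would come from one of the 111 studies, and moderator analyses (for example, self-report versus implicit measures) would compare pooled estimates across subsets of studies.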

Tuesday, July 12, 2016

Canada Legalizes Physician-Assisted Dying

By Merrit Kennedy
NPR.org
Originally posted June 18, 2016

After weeks of debate, Canadian lawmakers have passed legislation to legalize physician-assisted death.

That makes Canada "one of the few nations where doctors can legally help sick people die," as Reuters reports.

The new law "limits the option to the incurably ill, requires medical approval and mandates a 15-day waiting period," as The Two-Way has reported.

The Canadian government introduced the bill in April and it passed a final Senate vote Friday. It includes strict criteria that patients must meet to obtain a doctor's help in dying.

The article is here.

Why Bioethics Needs a Disability Moral Psychology

Joseph A. Stramondo
Hastings Center Report
Volume 46, Issue 3, pages 22–30, May/June 2016

Abstract

The deeply entrenched, sometimes heated conflict between the disability movement and the profession of bioethics is well known and well documented. Critiques of prenatal diagnosis and selective abortion are probably the most salient and most sophisticated of disability studies scholars’ engagements with bioethics, but there are many other topics over which disability activists and scholars have encountered the field of bioethics in an adversarial way, including health care rationing, growth-attenuation interventions, assisted reproduction technology, and physician-assisted suicide.


The tension between the analyses of the disability studies scholars and mainstream bioethics is not merely a conflict between two insular political groups, however; it is, rather, also an encounter between those who have experienced disability and those who have not. This paper explores that idea. I maintain that it is a mistake to think of this conflict as arising just from a difference in ideology or political commitments because it represents a much deeper difference—one rooted in variations in how human beings perceive and reason about moral problems. These are what I will refer to as variations of moral psychology. The lived experiences of disability produce variations in moral psychology that are at the heart of the moral conflict between the disability movement and mainstream bioethics. I will illustrate this point by exploring how the disability movement and mainstream bioethics come into conflict when perceiving and analyzing the moral problem of physician-assisted suicide via the lens of the principle of respect for autonomy. To reconcile its contemporary and historical conflict with the disability movement, the field of bioethics must engage with and fully consider the two groups’ differences in moral perception and reasoning, not just the explicit moral and political arguments of the disability movement.

The article is here.

Monday, July 11, 2016

Facebook has a new process for discussing ethics. But is it ethical?

Anna Lauren Hoffman
The Guardian
Originally posted Friday 17 June 2016

Here is an excerpt:

Tellingly, Facebook’s descriptions of procedure and process offer little insight into the values and ideals that drive its decision-making. Instead, the authors offer vague, hollow and at times conflicting statements such as noting how its reviewers “consider how the research will improve our society, our community, and Facebook”.

This seemingly innocuous statement raises more ethical questions than it answers. What does Facebook think an “improved” society looks like? Who or what constitutes “our community?” What values inform their ideas of a better society?

Facebook sidesteps this completely by saying that ethical oversight necessarily involves subjectivity and a degree of discretion on the part of reviewers – yet simply noting that subjectivity is unavoidable does not negate the fact that explicit discussion of ethical values is important.

The article is here.

Ethical Considerations Prompt New Telemedicine Rules

American Medical Association
Press Release
Originally released June 13, 2016

With the increasing use of telemedicine and telehealth technologies, delegates at the 2016 AMA Annual Meeting adopted new policy that outlines ethical ground rules for physicians using these technologies to treat patients.

The guidelines

The policy, based on a report from the AMA Council on Ethical and Judicial Affairs, notes that while physicians’ fundamental ethical responsibilities don’t change when providing telemedicine, new technology has given rise to the need for further guidance.

“Telehealth and telemedicine are another stage in the ongoing evolution of new models for the delivery of care and patient-physician interactions,” AMA Board Member Jack Resneck, MD, said in a news release. “The new AMA ethical guidance notes that while new technologies and new models of care will continue to emerge, physicians’ fundamental ethical responsibilities do not change.”

The press release is here.

Sunday, July 10, 2016

Deontology Or Trustworthiness?

A Conversation Between Molly Crockett, Daniel Kahneman
Edge.org
June 16, 2016

Here is an excerpt:

DANIEL KAHNEMAN:  Molly, you started your career as a neuroscientist, and you still are. Yet, much of the work that you do now is about moral judgment. What journey got you there?            

MOLLY CROCKETT:  I've always been interested in how we make decisions. In particular, why is it that the same person will sometimes make a decision that follows one set of principles or rules, and other times make a wildly different decision? These intra-individual variations in decision making have always fascinated me, specifically in the moral domain, but also in other kinds of decision making, more broadly.

I got interested in brain chemistry because this seemed to be a neural implementation or solution for how a person could be so different in their disposition across time, because we know brain chemistry is sensitive to aspects of the environment. I picked that methodology as a tool with which to study why our decisions can shift so much, even within the same person; morality is one clear demonstration of how this happens.            

KAHNEMAN:  Are you already doing that research, connecting moral judgment to chemistry?

CROCKETT:  Yes. One of the first entry points into the moral psychology literature during my PhD was a study where we gave people different kinds of psychoactive drugs. We gave people an antidepressant drug that affected their serotonin, or an ADHD drug that affected their noradrenaline, and then we looked at how these drugs affected the way people made moral judgments. In that literature, you can compare two different schools of moral thought for how people ought to make moral decisions.

The entire transcript, video, and audio are here.

Saturday, July 9, 2016

Facebook Offers Tools for Those Who Fear a Friend May Be Suicidal

By Mike Isaac
The New York Times
June 14, 2016

Here is an excerpt:

With more than 1.65 billion members worldwide posting regularly about their behavior, Facebook is planning to take a more direct role in stopping suicide. On Tuesday, in the biggest step by a major technology company to incorporate suicide prevention tools into its platform, the social network introduced mechanisms and processes to make it easier for people to help friends who post messages about suicide or self-harm. With the new features, people can flag friends’ posts that they deem suicidal; the posts will be reviewed by a team at the social network that will then provide language to communicate with the person who is at risk, as well as information on suicide prevention.

The timing coincides with a surge in suicide rates in the United States to a 30-year high. The increase has been particularly steep among women and middle-aged Americans, reflecting widespread desperation. Last year, President Obama declared a World Suicide Prevention Day in September, calling on people to recognize mental health issues early and to reach out to support one another.

Friday, July 8, 2016

Could a device tell your brain to make healthy choices?

by Yasmin Anwar
Futurity
Originally posted June 13, 2016

New research suggests it’s possible to detect when our brain is making a decision and nudge it to make the healthier choice.

In recording moment-to-moment deliberations by macaque monkeys over which option is likely to yield the most fruit juice, scientists have captured the dynamics of decision-making down to millisecond changes in neurons in the brain’s orbitofrontal cortex.

The article is here.

Thursday, July 7, 2016

The Mistrust of Science

By Atul Gawande
The New Yorker
Originally posted June 10, 2016

Here are two excerpts:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

(cut)

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.

The article is here.

Secrets and lies: Faked data and lack of transparency plague global drug manufacturing

By Kelly Crowe
CBC News 
Originally posted: June 10, 2016

Here is an excerpt:

In another case, when the FDA responded to complaints from U.S. manufacturers about impurities in raw ingredients from a Chinese company and asked to see the data, inspectors discovered it had been deleted and the audit trail disabled.

Two companies on Health Canada's watch list have been caught falsifying the source of their active pharmaceutical ingredient. Both claimed to have made the raw material, but actually purchased it from somewhere else.

There's tragic proof that data integrity matters. In 2008, 19 people in the U.S. died and hundreds more were sickened by a contaminated blood thinner made from a raw material the FDA believes had been tampered with at its source in China.

The article is here.

Wednesday, July 6, 2016

Intrinsic honesty and the prevalence of rule violations across societies

Simon Gächter & Jonathan F. Schulz
Nature 531, 496–499 (24 March 2016)
doi:10.1038/nature17160

Abstract

Deception is common in nature and humans are no exception. Modern societies have created institutions to control cheating, but many situations remain where only intrinsic honesty keeps people from cheating and violating rules. Psychological, sociological and economic theories suggest causal pathways to explain how the prevalence of rule violations in people’s social environment, such as corruption, tax evasion or political fraud, can compromise individual intrinsic honesty. Here we present cross-societal experiments from 23 countries around the world that demonstrate a robust link between the prevalence of rule violations and intrinsic honesty. We developed an index of the ‘prevalence of rule violations’ (PRV) based on country-level data from the year 2003 of corruption, tax evasion and fraudulent politics. We measured intrinsic honesty in an anonymous die-rolling experiment. We conducted the experiments with 2,568 young participants (students) who, due to their young age in 2003, could not have influenced PRV in 2003. We find individual intrinsic honesty is stronger in the subject pools of low PRV countries than those of high PRV countries. The details of lying patterns support psychological theories of honesty. The results are consistent with theories of the cultural co-evolution of institutions and values, and show that weak institutions and cultural legacies that generate rule violations not only have direct adverse economic consequences, but might also impair individual intrinsic honesty that is crucial for the smooth functioning of society.
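
Editor's note: The abstract describes the honesty measure only briefly, so the sketch below is an assumption-laden illustration of how die-rolling tasks of this kind are typically analyzed: participants privately roll a die and report the outcome, higher reports earn more money, and honesty is inferred at the group level by comparing reported outcomes with the uniform distribution an honest population would produce. The payoff rule and the simple lying model here are invented for illustration and do not reproduce Gächter and Schulz's actual design.

import random

def simulate_reports(n_participants, p_lie, rng):
    """p_lie: probability that a participant inflates a low roll to the maximum."""
    reports = []
    for _ in range(n_participants):
        roll = rng.randint(1, 6)          # private roll of a fair die
        if roll < 6 and rng.random() < p_lie:
            roll = 6                      # inflated, dishonest report
        reports.append(roll)
    return reports

def summary(reports):
    share_sixes = sum(r == 6 for r in reports) / len(reports)
    return sum(reports) / len(reports), share_sixes

rng = random.Random(0)
pools = [("honest pool", simulate_reports(2000, 0.0, rng)),
         ("partly dishonest pool", simulate_reports(2000, 0.3, rng))]
for label, reports in pools:
    mean, share = summary(reports)
    print(f"{label}: mean report = {mean:.2f} (honest benchmark 3.5), "
          f"share of sixes = {share:.2f} (honest benchmark {1/6:.2f})")

Because reporting is anonymous, no individual can be identified as a liar; only aggregate deviations from the honest benchmark are informative, which is what allows comparisons across subject pools.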

The article is here.

Tuesday, July 5, 2016

How scientists fool themselves – and how they can stop

Regina Nuzzo
Nature 526, 182–185 (08 October 2015)
doi:10.1038/526182a

Here is an excerpt:

This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.

Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies.

The article is here.

Editor's note: These biases also apply to clinicians who use research or their own theories about how and why psychotherapy works.

Commentary: The dangerous growth of pseudophysics

Sadri Hassani
Physics Today
Originally posted May 2016, page 10

Here is an excerpt:

Among the factors contributing to the rapid growth of pseudoscience are various misrepresentations of modern physics and especially of QT. Some prominent physicists of the past century have presented philosophical outlooks that, as mystical and antiscientific as they may be, have become wrongfully associated with modern physics. And the public’s scant knowledge about the underlying principles of science, combined with the compelling power of science exhibited in smartphones, GPS, and confirmation of the Higgs boson and gravitational waves, turns those philosophical misrepresentations into a forceful engine for promoting such nonsense as quantum healing, quantum touch therapy, and other “quantum” commodities sold in the crowded information marketplace.

The article is here.

Monday, July 4, 2016

Experts worry high military suicide rates are 'new normal'

by Gregg Zoroya
USA Today
Originally published June 12, 2016

Seven years after the rate of suicides by soldiers more than doubled, the Army has failed to reduce the tragic pace of self-destruction, and experts worry the problem is a "new normal."

"It's very clear that nothing that the Army has done has resulted in the suicide rates coming down," said Carl Castro, a psychologist who retired from the Army in 2013, when he was a colonel overseeing behavioral health research programs.

The sharp rise in the Army's suicide rate from 2004 through 2009 coincided with unusually heavy demands on the nation's all-volunteer military, as hundreds of thousands of troops, most of them in the Army, deployed to Iraq and Afghanistan. The vast majority have since come home, but suicide rates remain stubbornly high.

The Army's suicide rate for active-duty soldiers averaged nearly 11 per 100,000 from Sept. 11, 2001, until shortly after the Iraq invasion in 2004. It more than doubled over the next five years, and, with the exception of a spike in 2012, has remained largely constant at 24 to 25 per 100,000, roughly 20% to 25% higher than a civilian population of the same age and gender makeup as the military.

The article is here.

Newly released CIA files expose grim details of agency interrogation program

by Greg Miller, Karen DeYoung and Julie Tate
The Washington Post
Originally posted June 14, 2016

The CIA released dozens of previously classified documents Tuesday that expose disturbing new details of the agency’s treatment of terrorism suspects after the Sept. 11, 2001, attacks, including one who died in Afghanistan in 2002 after being doused with water and chained to a concrete floor as temperatures plunged below freezing.

The files include granular descriptions of the inner workings of the CIA’s “black site” prisons, messages sent to CIA headquarters from field officers who expressed deep misgivings with how detainees were being treated and secret memos raising objections to the roles played by doctors and psychologists in the administration of treatment later condemned as torture.

But the collection also includes documents that were drafted by senior CIA officials to defend the interrogation program as it came under growing scrutiny, including a lengthy memo asserting that the use of often brutal methods had saved thousands of civilian lives.

The 50 documents included in the release were all drawn from records turned over to the Senate Intelligence Committee as part of its multi-year probe of the interrogation program.

The article is here.

Sunday, July 3, 2016

Disgust made us human

By Kathleen McAuliffe
Aeon
Originally posted June 6, 2016

Here are two excerpts:

If you’re skeptical that parasites have any bearing on your principles, consider this: our values actually change when there are infectious agents in our vicinity. In an experiment by Simone Schnall, a social psychologist at the University of Cambridge, students were asked to ponder morally questionable behaviour such as lying on a résumé, not returning a stolen wallet or, far more fraught, turning to cannibalism to survive a plane crash. Subjects seated at desks with food stains and chewed-up pens typically judged these transgressions as more egregious than students at spotless desks did. Numerous other studies – using, unbeknown to the participants, imaginative disgust elicitors such as fart spray or the scent of vomit – have reported similar findings. Premarital sex, bribery, pornography, unethical journalism, marriage between first cousins: all become more reprehensible when subjects are disgusted.

(cut)

From this point in human social development, it took a bit more rejiggering of the same circuitry to bring our species to a momentous place: we became disgusted by people who behaved immorally. This development, Curtis argues, is central to understanding how we became an extraordinarily social and cooperative species, capable of putting our minds together to solve problems, create new inventions, exploit natural resources with unprecedented efficiency and, ultimately, lay the foundations for civilisation.

The article is here.

Editor's note: If you can make it past the dog rape example at the beginning, this is a thought-provoking article.  See the "comments" section for readers' reactions to that example.

Saturday, July 2, 2016

We need morality to beat this hurricane of anger

Jonathan Sacks
The Telegraph
Originally published July 1, 2016

Here is an excerpt:

Morality has been outsourced to the market. The market gives us choices, and morality has been reduced to a set of choices in which right or wrong have no meaning beyond the satisfaction or frustration of desire. We find it increasingly hard to understand why there might be things we want to do and can afford to do, that we should not do because they are dishonourable or disloyal or demeaning: in a word, unethical. Too many people in positions of public trust have come to the conclusion that if you can get away with it, you would be a fool not to do it. That is how elites betray the public they were supposed to serve. When that happens, trust collapses and a civilization begins to decay and die.

Meanwhile the liberal democratic state abolished national identity in favour of multiculturalism. The effect was to turn society from a home into a hotel. In a hotel you pay the price, get a room, and are free to do what you like so long as you do not disturb the other guests. But a hotel is not a home. It doesn’t generate identity, loyalty or a sense of belonging. Multiculturalism was supposed to make Europe more tolerant. Its effect has been precisely the opposite, leading to segregation, not integration.

Selfishness Is Learned

By Matthew Hutson
Nautilus
Originally posted June 9, 2016

Many people cheat on taxes—no mystery there. But many people don’t, even if they wouldn’t be caught—now, that’s weird. Or is it? Psychologists are deeply perplexed by human moral behavior, because it often doesn’t seem to make any logical sense. You might think that we should just be grateful for it. But if we could understand these seemingly irrational acts, perhaps we could encourage more of them.

It’s not as though people haven’t been trying to fathom our moral instincts; it is one of the oldest concerns of philosophy and theology. But what distinguishes the project today is the sheer variety of academic disciplines it brings together: not just moral philosophy and psychology, but also biology, economics, mathematics, and computer science. They do not merely contemplate the rationale for moral beliefs, but study how morality operates in the real world, or fails to. David Rand of Yale University epitomizes the breadth of this science, ranging from abstract equations to large-scale societal interventions. “I’m a weird person,” he says, “who has a foot in each world, of model-making and of actual experiments and psychological theory building.”

The article is here.

Editor's note: There is a nice review of relevant research in this article.

Friday, July 1, 2016

Predicting Suicide is not Reliable, according to a recent study

Matthew Large, M. Kaneson, N. Myles, H. Myles, P. Gunaratne, C. Ryan
PLOS One
Published: June 10, 2016
http://dx.doi.org/10.1371/journal.pone.0156322

Discussion

The pooled estimate from a large and representative body of research conducted over 40 years suggests a statistically strong association between high-risk strata and completed suicide. However, the meta-analysis of the sensitivity of suicide risk categorization found that about half of all suicides are likely to occur in lower-risk groups, and the meta-analysis of positive predictive value (PPV) suggests that 95% of high-risk patients will not suicide. Importantly, the pooled odds ratio (and the estimates of the sensitivity and PPV) and any assessment of the overall strength of risk assessment should be interpreted very cautiously in the context of several limitations documented below.

With respect to our first hypothesis, the statistical estimates of between-study heterogeneity and the distribution of the outlying, quartile, and median effect size values suggest that the statistical strength of suicide risk assessment cannot be considered consistent between studies, potentially limiting the generalizability of the pooled estimate.

With respect to our second hypothesis we found no evidence that the statistical strength of suicide risk assessment has improved over time.

The research is here.
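
Editor's note: To make the sensitivity and PPV figures quoted above more concrete, here is a minimal Python sketch using a hypothetical cohort. The numbers are invented for illustration and are not drawn from the study: a risk tool flags 1,000 patients as high-risk, of whom 50 die by suicide, while another 50 suicides occur among patients labeled lower-risk.

def sensitivity_and_ppv(true_pos, false_neg, false_pos):
    # true_pos: suicides among patients flagged high-risk
    # false_neg: suicides among patients flagged lower-risk
    # false_pos: high-risk patients who did not die by suicide
    sensitivity = true_pos / (true_pos + false_neg)  # share of all suicides caught by the high-risk label
    ppv = true_pos / (true_pos + false_pos)          # share of high-risk patients who actually die by suicide
    return sensitivity, ppv

# Hypothetical cohort mirroring the pooled estimates quoted above:
# 100 suicides in total, 50 of them flagged high-risk, and 950 high-risk
# patients who do not die by suicide.
sens, ppv = sensitivity_and_ppv(true_pos=50, false_neg=50, false_pos=950)
print(f"sensitivity = {sens:.2f}, PPV = {ppv:.2f}")  # sensitivity = 0.50, PPV = 0.05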

Predictive genetic testing for neurodegenerative conditions: how should conflicting interests within families be managed?

Zornitza Stark, Jane Wallace, Lynn Gillam, Matthew Burgess, Martin B Delatycki
J Med Ethics doi:10.1136/medethics-2016-103400

Abstract

Predictive genetic testing for a neurodegenerative condition in one individual in a family may have implications for other family members, in that it can reveal their genetic status. Herein a complex clinical case is explored in which the testing wish of one family member was in direct conflict with that of another. The son of a person at 50% risk of an autosomal dominant neurodegenerative condition requested testing to reveal his genetic status. The main reason for the request was that, if he had the familial mutation, he and his partner planned to utilise preimplantation genetic diagnosis to prevent his offspring having the condition. His at-risk parent was clear that if they found out they had the mutation, they would commit suicide. We assess the potential benefits and harms of acceding to or denying such a request and present an approach to balancing the competing rights of individuals within families at risk of late-onset genetic conditions, where family members have irreconcilable differences with respect to predictive testing. We argue that while it may not be possible to completely avoid harm in these situations, it is important to consider the magnitude of the risks and to make every effort to limit the potential for adverse outcomes.

The article is here.