Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, March 31, 2017

Dishonesty gets easier on the brain the more you do it

Neil Garrett
Aeon
Originally published March 7, 2017

Here are two excerpts:

These two ideas – the role of arousal on our willingness to cheat, and neural adaptation – are connected because the brain does not just adapt to things such as sounds and smells. The brain also adapts to emotions. For example, when presented with aversive pictures (eg, threatening faces) or receiving something unpleasant (eg, an electric shock), the brain will initially generate strong responses in regions associated with emotional processing. But when these experiences are repeated over time, the emotional responses diminish.

(cut)

There have also been a number of behavioural interventions proposed to curb unethical behaviour. These include using cues that emphasise morality and encouraging self-engagement. We don’t currently know the underlying neural mechanisms that can account for the positive behavioural changes these interventions drive. But an intriguing possibility is that they operate in part by shifting up our emotional reaction to situations in which dishonesty is an option, in turn helping us to resist the temptation to which we have become less resistant over time.

The article is here.

Signaling Emotion and Reason in Cooperation

Emma Edelman Levine, Alixandra Barasch, David G. Rand, Jonathan Z. Berman, and Deborah A. Small (February 23, 2017)

Abstract

We explore the signal value of emotion and reason in human cooperation. Across four experiments utilizing dyadic prisoner's dilemma games, we establish three central results. First, individuals believe that a reliance on emotion signals that one will cooperate more so than a reliance on reason. Second, these beliefs are generally accurate — those who act based on emotion are more likely to cooperate than those who act based on reason. Third, individuals’ behavioral responses towards signals of emotion and reason depend on their own decision mode: those who rely on emotion tend to conditionally cooperate (that is, cooperate only when they believe that their partner has cooperated), whereas those who rely on reason tend to defect regardless of their partner’s signal. These findings shed light on how different decision processes, and lay theories about decision processes, facilitate and impede cooperation.
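
As a concrete illustration of the third result, here is a minimal sketch (an illustration, not the authors' materials) of how the two decision modes play out in a one-shot dyadic prisoner's dilemma. The payoff values are standard textbook assumptions, not figures from the paper.

```python
# Minimal sketch (not the authors' materials) of the two decision modes
# described in the abstract, in a one-shot dyadic prisoner's dilemma.
# The payoff matrix is a standard textbook assumption, not from the paper.

PAYOFFS = {  # (my_move, partner_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def emotion_based(partner_signaled_cooperation: bool) -> str:
    """Conditional cooperation: cooperate only when the partner is
    believed to have cooperated."""
    return "C" if partner_signaled_cooperation else "D"

def reason_based(partner_signaled_cooperation: bool) -> str:
    """Defect regardless of the partner's signal."""
    return "D"

for signal in (True, False):
    e, r = emotion_based(signal), reason_based(signal)
    print(f"partner signaled cooperation={signal}: emotion plays {e}, reason plays {r}")
```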

Available at SSRN: https://ssrn.com/abstract=2922765

Editor's note: This research has implications for developing the therapeutic relationship.

Thursday, March 30, 2017

Risk considerations for suicidal physicians

Doug Brunk
Clinical Psychiatry News
Publish date: February 27, 2017

Here are two excerpts:

According to the American Foundation for Suicide Prevention, 300-400 physicians take their own lives every year, the equivalent of two to three medical school classes. “That’s a doctor a day we lose to suicide,” said Dr. Myers, a professor of clinical psychiatry at State University of New York, Brooklyn, who specializes in physician health. Compared with the general population, the suicide rate ratio is 2.27 among female physicians and 1.41 among male physicians (Am J Psychiatry. 2004;161[12]:2295-2302), and an estimated 85%-90% of those who carry out a suicide have a psychiatric illness such as major depressive disorder, bipolar disorder, alcohol use and substance use disorder, and borderline personality disorder. Other triggers common to physicians, Dr. Myers said, include other kinds of personality disorders, burnout, untreated anxiety disorders, substance/medication-induced depressive disorder (especially in clinicians who have been self-medicating), and posttraumatic stress disorder.

(cut)

Inadequate treatment can occur for physician patients because of transference and countertransference dynamics “that muddle the treatment dyad,” Dr. Myers added. “We must be mindful of the many issues that are going on when we treat our own.”

Association Between Physician Burnout and Identification With Medicine as a Calling

Andrew J. Jager, MA, Michael A. Tutty, PhD, Audiey C. Kao, PhD
Mayo Clinic Proceedings
DOI: http://dx.doi.org/10.1016/j.mayocp.2016.11.012

Objective

To evaluate the association between degree of professional burnout and physicians' sense of calling.

Participants and Methods

US physicians across all specialties were surveyed between October 24, 2014, and May 29, 2015. Professional burnout was assessed using a validated single-item measure. Sense of calling, defined as committing one's life to personally meaningful work that serves a prosocial purpose, was assessed using 6 validated true-false items. Associations between burnout and identification with calling items were assessed using multivariable logistic regressions.

Results

A total of 2263 physicians completed surveys (63.1% response rate). Among respondents, 28.5% (n=639) reported experiencing some degree of burnout. Compared with physicians who reported no burnout symptoms, those who were completely burned out had lower odds of finding their work rewarding (odds ratio [OR], 0.05; 95% CI, 0.02-0.10; P<.001), seeing their work as one of the most important things in their lives (OR, 0.38; 95% CI, 0.21-0.69; P<.001), or thinking their work makes the world a better place (OR, 0.38; 95% CI, 0.17-0.85; P=.02). Burnout was also associated with lower odds of enjoying talking about their work to others (OR, 0.23; 95% CI, 0.13-0.41; P<.001), choosing their work life again (OR, 0.11; 95% CI, 0.06-0.20; P<.001), or continuing with their current work even if they were no longer paid if they were financially stable (OR, 0.30; 95% CI, 0.15-0.59; P<.001).
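
Odds ratios are easy to misread as probability ratios, so the following sketch shows how an OR such as the 0.05 for finding work rewarding maps between odds and probabilities. The baseline probability used below is an invented assumption for illustration, not a figure from the study.

```python
# Illustrative only: how an odds ratio like the OR of 0.05 for "finding
# work rewarding" translates between odds and probability. The baseline
# probability is a made-up assumption, not a figure from the study.

def odds(p: float) -> float:
    return p / (1 - p)

def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Probability in the comparison group implied by an odds ratio."""
    o = odds(p_baseline) * odds_ratio
    return o / (1 + o)

p_no_burnout = 0.90  # assumed share of non-burned-out physicians finding work rewarding
p_burned_out = apply_odds_ratio(p_no_burnout, 0.05)
print(f"implied probability for completely burned-out group: {p_burned_out:.2f}")  # ~0.31
```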

Conclusion

Physicians who experience more burnout are less likely to identify with medicine as a calling. Erosion of the sense that medicine is a calling may have adverse consequences for physicians as well as those for whom they care.

Wednesday, March 29, 2017

Neuroethics and the Ethical Parity Principle

DeMarco, J.P. & Ford, P.J.
Neuroethics (2014) 7: 317.
doi:10.1007/s12152-014-9211-6

Abstract

Neil Levy offers the most prominent moral principles that are specifically and exclusively designed to apply to neuroethics. His two closely related principles, labeled as versions of the ethical parity principle (EPP), are intended to resolve moral concerns about neurological modification and enhancement [1]. Though EPP is appealing and potentially illuminating, we reject the first version and substantially modify the second. Since his first principle, called EPP (strong), is dependent on the contention that the mind literally extends into external props such as paper notebooks and electronic devices, we begin with an examination of the extended mind hypothesis (EMH) and its use in Levy’s EPP (strong). We argue against reliance on EMH as support for EPP (strong). We turn to his second principle, EPP (weak), which is not dependent on EMH but is tied to the acceptable claim that the mind is embedded in, because dependent on, external props. As a result of our critique of EPP (weak), we develop a modified version of EPP (weak), which we argue is more acceptable than Levy’s principle. Finally, we evaluate the applicability of our version of EPP (weak).

The article is here.

Philosopher Daniel Dennett on AI, robots and religion

John Thornhill
Financial Times
Originally published March 3, 2017

Here are two excerpts:

AI experts tend to draw a sharp distinction between machine intelligence and human consciousness. Dennett is not so sure. Where many worry that robots are becoming too human, he argues humans have always been largely robotic. Our consciousness is the product of the interactions of billions of neurons that are all, as he puts it, “sorta robots”.

“I’ve been arguing for years that, yes, in principle it’s possible for human consciousness to be realised in a machine. After all, that’s what we are,” he says. “We’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.”

(cut)

The term “inversion of reason”, he says, came from one of Darwin’s 19th-century critics, outraged at the biologist’s counterintuitive thinking. Rather than accepting that an absolute intelligence was responsible for the creation of species, the critic denounced Darwin for believing that absolute ignorance had accomplished all the marvels of creative skill. “And of course that’s right. That’s exactly what Darwin was saying. Darwin says the nightingale is created by a process with no intelligence at all. So that’s the first inversion of reasoning.”

The article is here.

Tuesday, March 28, 2017

Why We Believe Obvious Untruths

Philip Fernbach & Steven Sloman
The New York Times
Originally published March 3, 2017

How can so many people believe things that are demonstrably false? The question has taken on new urgency as the Trump administration propagates falsehoods about voter fraud, climate change and crime statistics that large swaths of the population have bought into. But collective delusion is not new, nor is it the sole province of the political right. Plenty of liberals believe, counter to scientific consensus, that G.M.O.s are poisonous, and that vaccines cause autism.

The situation is vexing because it seems so easy to solve. The truth is obvious if you bother to look for it, right? This line of thinking leads to explanations of the hoodwinked masses that amount to little more than name calling: “Those people are foolish” or “Those people are monsters.”

Such accounts may make us feel good about ourselves, but they are misguided and simplistic: They reflect a misunderstanding of knowledge that focuses too narrowly on what goes on between our ears. Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works.

What really sets human beings apart is not our individual mental capacity. The secret to our success is our ability to jointly pursue complex goals by dividing cognitive labor. Hunting, trade, agriculture, manufacturing — all of our world-altering innovations — were made possible by this ability. Chimpanzees can surpass young children on numerical and spatial reasoning tasks, but they cannot come close on tasks that require collaborating with another individual to achieve a goal. Each of us knows only a little bit, but together we can achieve remarkable feats.

Facebook Is Using Artificial Intelligence To Help Prevent Suicide

Alex Kantrowitz
BuzzFeed
Originally published March 1, 2017

Facebook is bringing its artificial intelligence expertise to bear on suicide prevention, an issue that’s been top of mind for CEO Mark Zuckerberg following a series of suicides livestreamed via the company’s Facebook Live video service in recent months.

“It’s hard to be running this company and feel like, okay, well, we didn’t do anything because no one reported it to us,” Zuckerberg told BuzzFeed News in an interview last month. “You want to go build the technology that enables the friends and people in the community to go reach out and help in examples like that.”

Today, Facebook is introducing an important piece of that technology — a suicide-prevention feature that uses AI to identify posts indicating suicidal or harmful thoughts. The AI scans the posts and their associated comments, compares them to others that merited intervention, and, in some cases, passes them along to its community team for review. The company plans to proactively reach out to users it believes are at risk, showing them a screen with suicide-prevention resources including options to contact a helpline or reach out to a friend.
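
The pipeline described above (scan a post, compare it with past posts that merited intervention, route likely matches to human reviewers) can be illustrated with a toy similarity-based flagger. This is a rough sketch of the general pattern only, not Facebook's system; the example posts and the threshold are invented.

```python
# Toy sketch of the general pattern only (not Facebook's system): flag a
# post for human review when it resembles past posts that merited
# intervention. Example posts and the threshold are invented.

from collections import Counter
from math import sqrt

def bow(text: str) -> Counter:
    """Bag-of-words representation of a post."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

flagged_examples = [bow("i cant go on anymore"), bow("no reason to keep living")]

def needs_human_review(post: str, threshold: float = 0.5) -> bool:
    return max(cosine(bow(post), ex) for ex in flagged_examples) >= threshold

print(needs_human_review("i really cant go on"))   # True -> route to reviewers
print(needs_human_review("great dinner tonight"))  # False
```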

The article is here.

Monday, March 27, 2017

Healthcare Data Breaches Up 40% Since 2015

Alexandria Wilson Pecci
MedPage Today
Originally posted February 26, 2017

Here is an excerpt:

Broken down by industry, hacking was the most common data breach source for the healthcare sector, according to data provided to HealthLeaders Media by the Identity Theft Resource Center. Physical theft was the biggest breach category for healthcare in 2015 and 2014.

Insider theft and employee error/negligence tied for the second most common data breach sources in 2016 in the health industry. In addition, insider theft was a bigger problem in the healthcare sector than in other industries, and has been for the past five years.

Insider theft is alleged to have been at play in the Jackson Health System incident. Former employee Evelina Sophia Reid was charged in a fourteen-count indictment with conspiracy to commit access device fraud, possessing fifteen or more unauthorized access devices, aggravated identity theft, and computer fraud, the Department of Justice said. Prosecutors say that her co-conspirators used the stolen information to file fraudulent tax returns in the patients' names.

The article is here.

US Researchers Found Guilty of Misconduct Collectively Awarded $101 Million

Joshua A. Krisch
The Scientist
February 27, 2017

Researchers found guilty of scientific misconduct by the US Department of Health and Human Services (HHS) went on to collectively receive $101 million from the National Institutes of Health (NIH), according to a study published this month (February 1) in the Journal of Empirical Research on Human Research Ethics. The authors also found that 47.2 percent of the guilty researchers they examined continue to publish studies.

The article is here.

The research is here.

Sunday, March 26, 2017

Moral Enhancement Using Non-invasive Brain Stimulation

R. Ryan Darby and Alvaro Pascual-Leone
Front. Hum. Neurosci., 22 February 2017
https://doi.org/10.3389/fnhum.2017.00077

Biomedical enhancement refers to the use of biomedical interventions to improve capacities beyond normal, rather than to treat deficiencies due to diseases. Enhancement can target physical or cognitive capacities, but also complex human behaviors such as morality. However, the complexity of normal moral behavior makes it unlikely that morality is a single capacity that can be deficient or enhanced. Instead, our central hypothesis will be that moral behavior results from multiple, interacting cognitive-affective networks in the brain. First, we will test this hypothesis by reviewing evidence for modulation of moral behavior using non-invasive brain stimulation. Next, we will discuss how this evidence affects ethical issues related to the use of moral enhancement. We end with the conclusion that while brain stimulation has the potential to alter moral behavior, such alteration is unlikely to improve moral behavior in all situations, and may even lead to less morally desirable behavior in some instances.

The article is here.

Saturday, March 25, 2017

White House Ethics Loophole for Ivanka 'Doesn't Work,' Say Watchdogs

Nika Knight
Common Dreams
Originally posted on March 24, 2017

Here are two excerpts:

The ethics advocates express "deep concern about the highly unusual and inappropriate arrangement that is being proposed for Ivanka Trump, the President's daughter, to play a formalized role in the White House without being required to comply with the ethics and disclosure requirements that apply to White House employees," arguing that the "arrangement appears designed to allow Ms. Trump to avoid the ethics, conflict-of-interest, and other rules that apply to White House employees."

(cut)

"The basic problem in the proposed relationship is that it appears to be trying to create a middle space that does not exist," the letter explains. "On the one hand Ms. Trump's position will provide her with the privileges and opportunities for public service that attach to being a White House employee. On the other hand, she remains the owner of a private business who is free from the ethics and conflicts rules that apply to White House employees."

The article is here.

Will Democracy Survive Big Data and Artificial Intelligence?

Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, and others
Scientific American
Originally posted February 25, 2017

Here is an excerpt:

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.

(cut)

These technologies are also becoming increasingly popular in the world of politics. Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a "nudge"—a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is "big nudging", which is the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered “wise king”, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.

The article is here.

Friday, March 24, 2017

A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity

Rothschild, Z.K. & Keefer, L.A.
Motiv Emot (2017). doi:10.1007/s11031-017-9601-2

Abstract

Why do people express moral outrage? While this sentiment often stems from a perceived violation of some moral principle, we test the counter-intuitive possibility that moral outrage at third-party transgressions is sometimes a means of reducing guilt over one’s own moral failings and restoring a moral identity. We tested this guilt-driven account of outrage in five studies examining outrage at corporate labor exploitation and environmental destruction. Study 1 showed that personal guilt uniquely predicted moral outrage at corporate harm-doing and support for retributive punishment. Ingroup (vs. outgroup) wrongdoing elicited outrage at corporations through increased guilt, while the opportunity to express outrage reduced guilt (Study 2) and restored perceived personal morality (Study 3). Study 4 tested whether effects were due merely to downward social comparison, and Study 5 showed that guilt-driven outrage was attenuated by an affirmation of moral identity in an unrelated context.

The article is here.

The Privacy Delusions Of Genetic Testing

Peter Pitts
Forbes
Originally posted February 15, 2017

Here is an excerpt:

The problem starts with the Health Insurance Portability and Accountability Act (HIPAA), a 1996 federal law that allows medical companies to share and sell patient data if it has been "anonymized," or scrubbed of any obvious identifying characteristics.

The Portability Act was passed when genetic testing was just a distant dream on the horizon of personalized medicine. But today, that loophole has proven to be a cash cow. For instance, 23andMe has sold access to its database to at least 13 outside pharmaceutical firms. One buyer, Genentech, ponied up a cool $10 million for the genetic profiles of people suffering from Parkinson's. AncestryDNA, another popular personal genetics company, recently announced a lucrative data-sharing partnership with the biotech company Calico.

Thursday, March 23, 2017

Ousted national security adviser didn't sign ethics pledge

By Julie Bykowicz
Associate Press
Originally posted March 22, 2017

President Donald Trump's former national security adviser Michael Flynn did not sign a mandatory ethics pledge ahead of his forced resignation in February, raising questions about the White House's commitment to the lobbying and ethics rules it imposed as part of the president's promise to "drain the swamp."

Flynn "didn't have the opportunity to sign it," said Price Floyd, a spokesman for the retired Army general. "But he is going to abide by the pledge" and has not engaged in any lobbying work since leaving the White House that would have violated the pledge, Floyd said.

Trump signed an executive order on Jan. 28 prohibiting political appointees from lobbying the government in any way for five years after serving in his administration. That same order instituted a lifetime ban on outgoing officials representing foreign governments.

The article is here.

Why we must teach morality to robots: Podcast

Daniel Glaser
The Guardian
Originally published February 27, 2017

Every week comes a new warning that robots are taking over our jobs. People have become troubled by the question of how robots will learn ethics, if they do take over our work and our planet.

As early as the 1940s, Isaac Asimov came up with the ‘Three Laws of Robotics’, outlining moral rules robots should abide by. More recently there has been official guidance from the British Standards Institute advising designers how to create ethical robots, which is meant to keep them from taking over the world.

From a neuroscientist’s perspective, they should learn more from human development. We teach children morality before algebra. When they’re able to behave well in a social situation, we teach them language skills and more complex reasoning. It needs to happen this way round. Even the most sophisticated bomb-sniffing dog is taught to sit first.

If we’re interested in really making robots think more like we do, we can’t retrofit morality and ethics. We need to focus on that first, build it into their core, and then teach them to drive.

Wednesday, March 22, 2017

The Case of Dr. Oz: Ethics, Evidence, and Does Professional Self-Regulation Work?

Jon C. Tilburt, Megan Allyse, and Frederic W. Hafferty
AMA Journal of Ethics. February 2017, Volume 19, Number 2: 199-206.

Abstract

Dr. Mehmet Oz is widely known not just as a successful media personality donning the title “America’s Doctor®,” but, we suggest, also as a physician visibly out of step with his profession. A recent, unsuccessful attempt to censure Dr. Oz raises the issue of whether the medical profession can effectively self-regulate at all. It also raises concern that the medical profession’s self-regulation might be selectively activated, perhaps only when the subject of professional censure has achieved a level of public visibility. We argue here that the medical profession must look at itself with a healthy dose of self-doubt about whether it has sufficient knowledge of or handle on the less visible Dr. “Ozes” quietly operating under the profession’s presumptive endorsement.

The article is here.

Act versus Impact: Conservatives and Liberals Exhibit Different Structural Emphases in Moral Judgment

Ivar R. Hannikainen, Ryan M. Miller, & Fiery A. Cushman
Ratio: Special Issue on ‘Experimental Philosophy as Applied Philosophy’
Forthcoming

Conservatives and liberals disagree sharply on matters of morality and public policy. We propose a novel account of the psychological basis of these differences. Specifically, we find that conservatives tend to emphasize the intrinsic value of actions during moral judgment, in part by mentally simulating themselves performing those actions, while liberals instead emphasize the value of the expected outcomes of the action. We then demonstrate that a structural emphasis on actions is linked to the condemnation of victimless crimes, a distinctive feature of conservative morality. Next, we find that the conservative and liberal structural approaches to moral judgment are associated with their corresponding patterns of reliance on distinct moral foundations. In addition, the structural approach uniquely predicts that conservatives will be more opposed to harm in circumstances like the well-known trolley problem, a result which we replicate. Finally, we show that the structural approaches of conservatives and liberals are partly linked to underlying cognitive styles (intuitive versus deliberative). Collectively, these findings forge a link between two important yet previously independent lines of research in political psychology: cognitive style and moral foundations theory.

The article is here.

Tuesday, March 21, 2017

Why can 12-year-olds still get married in the United States?

Fraidy Reiss
The Washington Post
Originally published February 10, 2017

Here is an excerpt:

Unchained At Last, a nonprofit I founded to help women resist or escape forced marriage in the United States, spent the past year collecting marriage license data from 2000 to 2010, the most recent year for which most states were able to provide information. We learned that in 38 states, more than 167,000 children — almost all of them girls, some as young as 12 — were married during that period, mostly to men 18 or older. Twelve states and the District of Columbia were unable to provide information on how many children had married there in that decade. Based on the correlation we identified between state population and child marriage, we estimated that the total number of children wed in America between 2000 and 2010 was nearly 248,000.
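
The nationwide figure rests on simple proportional extrapolation: divide the observed count by the share of the population covered by the reporting states. Here is a hedged sketch of that arithmetic; the coverage share is an illustrative assumption, not a number from the report.

```python
# Hedged sketch of the proportional extrapolation described above. The
# reporting states' population share is an illustrative assumption (the
# article does not state it), chosen so the arithmetic is visible.

counted = 167_000          # child marriages recorded in the 38 reporting states
population_share = 0.675   # assumed share of the US population those states cover

estimated_total = counted / population_share
print(f"{estimated_total:,.0f}")  # ~247,000, in line with the 'nearly 248,000' estimate
```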

Despite these alarming numbers, and despite the documented consequences of early marriages, including negative effects on health and education and an increased likelihood of domestic violence, some state lawmakers have resisted passing legislation to end child marriage — because they wrongly fear that such measures might unlawfully stifle religious freedom or because they cling to the notion that marriage is the best solution for a teen pregnancy.

The article is here.

Ethical concerns for telemental health therapy amidst governmental surveillance.

Samuel D. Lustgarten and Alexander J. Colbow
American Psychologist, Vol 72(2), Feb-Mar 2017, 159-170.

Abstract

Technology, infrastructure, governmental support, and interest in mental health accessibility have led to a burgeoning field of telemental health therapy (TMHT). Psychologists can now provide therapy via computers at great distances and little cost for parties involved. Growth of TMHT within the U.S. Department of Veterans Affairs and among psychologists surveyed by the American Psychological Association (APA) suggests optimism in this provision of services (Godleski, Darkins, & Peters, 2012; Jacobsen & Kohout, 2010). Despite these advances, psychologists using technology must keep abreast of potential limitations to privacy and confidentiality. However, no scholarly articles have appraised the ramifications of recent government surveillance disclosures (e.g., “The NSA Files”; Greenwald, 2013) and how they might affect TMHT usage within the field of psychology. This article reviews the current state of TMHT in psychology, APA’s guidelines, current governmental threats to client privacy, and other ethical ramifications that might result. Best practices for the field of psychology are proposed.

The article is here.

Monday, March 20, 2017

When Evidence Says No, But Doctors Say Yes

David Epstein
ProPublica
Originally published February 22, 2017

Here is an excerpt:

When you visit a doctor, you probably assume the treatment you receive is backed by evidence from medical research. Surely, the drug you’re prescribed or the surgery you’ll undergo wouldn’t be so common if it didn’t work, right?

For all the truly wondrous developments of modern medicine — imaging technologies that enable precision surgery, routine organ transplants, care that transforms premature infants into perfectly healthy kids, and remarkable chemotherapy treatments, to name a few — it is distressingly ordinary for patients to get treatments that research has shown are ineffective or even dangerous. Sometimes doctors simply haven’t kept up with the science. Other times doctors know the state of play perfectly well but continue to deliver these treatments because it’s profitable — or even because they’re popular and patients demand them. Some procedures are implemented based on studies that did not prove whether they really worked in the first place. Others were initially supported by evidence but then were contradicted by better evidence, and yet these procedures have remained the standards of care for years, or decades.

The article is here.

The Enforcement of Moral Boundaries Promotes Cooperation and Prosocial Behavior in Groups

Brent Simpson, Robb Willer & Ashley Harrell
Scientific Reports 7, Article number: 42844 (2017)

Abstract

The threat of free-riding makes the marshalling of cooperation from group members a fundamental challenge of social life. Where classical social science theory saw the enforcement of moral boundaries as a critical way by which group members regulate one another’s self-interest and build cooperation, moral judgments have most often been studied as processes internal to individuals. Here we investigate how the interpersonal expression of positive and negative moral judgments encourages cooperation in groups and prosocial behavior between group members. In a laboratory experiment, groups whose members could make moral judgments achieved greater cooperation than groups with no capacity to sanction, levels comparable to those of groups featuring costly material sanctions. In addition, members of moral judgment groups subsequently showed more interpersonal trust, trustworthiness, and generosity than all other groups. These findings extend prior work on peer enforcement, highlighting how the enforcement of moral boundaries offers an efficient solution to cooperation problems and promotes prosocial behavior between group members.

The article is here.

Sunday, March 19, 2017

Revamping the US Federal Common Rule: Modernizing Human Participant Research Regulations

James G. Hodge Jr. and Lawrence O. Gostin
JAMA. Published online February 22, 2017

On January 19, 2017, the Office for Human Research Protections (OHRP), Department of Health and Human Services, and 15 federal agencies published a final rule to modernize the Federal Policy for the Protection of Human Subjects (known as the “Common Rule”). Initially introduced more than a quarter century ago, the Common Rule predated modern scientific methods and findings, notably human genome research.

Research enterprises now encompass vast multicenter trials in both academia and the private sector. The volume, types, and availability of public/private data and biospecimens have increased exponentially. Federal agencies demanded more accountability, research investigators sought more flexibility, and human participants desired more control over research. Most rule changes become effective in 2018, giving institutions time for implementation.

The article is here.

Saturday, March 18, 2017

Budgets are moral documents, and Trump’s is a moral failure

Dylan Matthews
vox.com
Originally published March 16, 2017

The budget is a moral document.

It’s not clear where that phrase originates, but it’s become a staple of fiscal policy debates in DC, and for very good reason. Budgets lay out how a fifth of the national economy is going to be allocated. They make trade-offs between cancer treatment and jet fighters, scientific research and tax cuts, national parks and border fences. These are all decisions with profound moral implications. Budgets, when implemented, can lift millions out of poverty, or consign millions more to it. They can provide universal health insurance or take coverage away from those who have it. They can fuel wars or support peacekeeping.

What President Donald Trump released on Thursday is not a full budget. It doesn’t touch on taxes, or on entitlement programs like Social Security, Medicare, Medicaid, or food stamps. It concerns itself exclusively with the third of the budget that’s allocated through the annual appropriations process.

But it’s a moral document nonetheless. And the moral consequences of its implementation would be profound, and negative. The fact that it will not be implemented in full — that Congress is almost certain not to go along with many of its recommendations — in no way detracts from what it tells us about the administration’s priorities, and its ethics.

Let’s start with poverty.

The article is here.

Friday, March 17, 2017

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

Bec Crew
Science Alert
Originally published February 13, 2017

Here is an excerpt:

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Matt Burgess at Wired.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents - two of them played as wolves, and one as the prey.

The article is here.

Professional Liability for Forensic Activities: Liability Without a Treatment Relationship

Donna Vanderpool
Innov Clin Neurosci. 2016 Jul-Aug; 13(7-8): 41–44.

This ongoing column is dedicated to providing information to our readers on managing legal risks associated with medical practice. We invite questions from our readers. The answers are provided by PRMS, Inc. (www.prms.com), a manager of medical professional liability insurance programs with services that include risk management consultation, education and onsite risk management audits, and other resources to healthcare providers to help improve patient outcomes and reduce professional liability risk. The answers published in this column represent those of only one risk management consulting company. Other risk management consulting companies or insurance carriers may provide different advice, and readers should take this into consideration. The information in this column does not constitute legal advice. For legal advice, contact your personal attorney. Note: The information and recommendations in this article are applicable to physicians and other healthcare professionals so “clinician” is used to indicate all treatment team members.

Question:

In my mental health practice, I am doing more and more forensic activities, such as IMEs and expert testimony. Since I am not treating the evaluees, there should be no professional liability risk, right?

The answer and column are here.

Thursday, March 16, 2017

Mercedes-Benz’s Self-Driving Cars Would Choose Passenger Lives Over Bystanders

David Z. Morris
Fortune
Originally published Oct 15, 2016

In comments published last week by Car and Driver, Mercedes-Benz executive Christoph von Hugo said that the carmaker’s future autonomous cars will save the car’s driver and passengers, even if that means sacrificing the lives of pedestrians, in a situation where those are the only two options.

“If you know you can save at least one person, at least save that one,” von Hugo said at the Paris Motor Show. “Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”

This doesn't mean Mercedes' robotic cars will neglect the safety of bystanders. Von Hugo, who is the carmaker’s manager of driver assistance and safety systems, is addressing the so-called “Trolley Problem”—an ethical thought experiment that applies to human drivers just as much as artificial intelligences.

The article is here.

The big moral dilemma facing self-driving cars

Steven Overly
The Washington Post
Originally published February 27, 2017

How many people could self-driving cars kill before we would no longer tolerate them?

This once-hypothetical question is now taking on greater urgency, particularly among policymakers in Washington. The promise of autonomous vehicles is that they will make our roads safer and more efficient, but no technology is without its shortcomings and unintended consequences — in this instance, potentially fatal consequences.

“What if we can build a car that’s 10 times as safe, which means 3,500 people die on the roads each year. Would we accept that?” asks John Hanson, a spokesman for the Toyota Research Institute, which is developing the automaker’s self-driving technology.

“A lot of people say if, ‘I could save one life it would be worth it.’ But in a practical manner, though, we don’t think that would be acceptable,” Hanson added.

The article is here.

Wednesday, March 15, 2017

Researchers Are Divided as FDA Moves to Regulate Gene Editing

Paul Basken
The Chronicle of Higher Education
Originally published February 22, 2017

As U.S. regulators threaten broad new limits on the use of gene-editing technology, a Utah State University researcher now engineering goats to produce spider silk in their milk isn’t particularly worried.

"They’re just trying to modernize" rules to keep up with technology, the Utah professor, Randolph V. Lewis, said of the changes proposed by the U.S. Food and Drug Administration.

But over in Minnesota, a researcher working to create cows without horns — as a way of keeping the animals safe from one another — has a far different take.

"It’s a huge overreach" by the FDA that could stifle innovation, said Scott C. Fahrenkrug, an adjunct professor of functional genomics at the University of Minnesota at Twin Cities.

The FDA is responsible for ensuring the safety of food and drugs sold to Americans, and for years it has defined that oversight to require its approval when genes are added to animals whose products might be consumed. The change it proposed last month would expand that authority to cover new technologies such as CRISPR that enable gene-specific editing, potentially enabling changes not found in any known species.

To supporters, the FDA is simply trying to keep up with the science. To detractors, it’s a reach for authority so broad as to go beyond any reasonable definition of the FDA’s mandate.

The article is here.

Will the 'hard problem' of consciousness ever be solved?

David Papineau
The Question
Originally published February 21, 2017

Here is an excerpt:

The problem, if there is one, is that we find the reduction of consciousness to brain processes very hard to believe. The flaw lies in us, not in the neuroscientific account of consciousness. Despite all the scientific evidence, we can’t free ourselves of the old-fashioned dualist idea that conscious states inhabit some extra dualist realm outside the physical brain.

Just consider how the hard problem is normally posed. Why do brain states give rise to conscious feelings? That is already dualist talk. If one thing gives rise to another, they must be separate. Fire gives rise to smoke, but H2O doesn’t give rise to water. So the very terminology presupposes that the conscious mind is different from the physical brain—which of course then makes us wonder why the brain generates this mysterious extra thing. On the other hand, if only we could properly accept that the mind just is the brain, then we would be no more inclined to ask why ‘they’ go together than we ask why H2O is water.

The article is here.

There is also a five-minute video on this page by Massimo Pigliucci on how the hard problem is a category mistake.

Tuesday, March 14, 2017

AI will make life meaningless, Elon Musk warns

Zoe Nauman
The Sun
Originally published February 17, 2017

Here is an excerpt:

“I think some kind of universal income will be necessary.”

“The harder challenge is how do people then have meaning – because a lot of people derive their meaning from their employment.”

“If you are not needed, if there is not a need for your labor. What’s the meaning?”

“Do you have meaning, are you useless? That is a much harder problem to deal with.”

The article is here.

“I placed too much faith in underpowered studies:” Nobel Prize winner admits mistakes

Retraction Watch
Originally posted February 21, 2017

Although it’s the right thing to do, it’s never easy to admit error — particularly when you’re an extremely high-profile scientist whose work is being dissected publicly. So while it’s not a retraction, we thought this was worth noting: A Nobel Prize-winning researcher has admitted on a blog that he relied on weak studies in a chapter of his bestselling book.

The blog — by Ulrich Schimmack, Moritz Heene, and Kamini Kesavan — critiqued the citations included in a book by Daniel Kahneman, a psychologist whose research has illuminated our understanding of how humans form judgments and make decisions and earned him half of the 2002 Nobel Prize in Economics.

The article is here.

Monday, March 13, 2017

The Republican health care bill makes no sense

Ezra Klein
Vox.com
Originally posted March 9, 2017


Here is the conclusion from the video:

In reality, what I think we’re seeing here is Republicans trying desperately to come up with something that would allow them to repeal and replace Obamacare. This is a compromise of a compromise of a compromise aimed at fulfilling that promise. But “repeal and replace” is a political slogan, not a policy goal. This is a lot of political pain to endure for a bill that won’t improve many peoples’ lives, but will badly hurt millions.

Read further analysis here and stories of legislative history here.

Why Facts Don't Change Our Minds

Elizabeth Kolbert
The New Yorker
Originally published February 27, 2017

Here is an excerpt:

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The article is here.

Sunday, March 12, 2017

Ethics Watchdogs Want U.S. Attorney To Investigate Trump's Business Interests

Jim Zarroli
NPR.org
Originally published March 8, 2017

With Congress showing no signs of taking action, a group of ethics watchdogs is turning to U.S. Attorney Preet Bharara to look into whether President Trump's many business interests violate the Emoluments Clause of the U.S. Constitution.

"Published reports indicate that the Trump Organization and related Trump business entities have been receiving payments from foreign government sources which benefit President Trump through his ownership of the Trump Organization and related business entities," according to a letter sent to Bharara.

(cut)

The Emoluments Clause says that "no Person holding any Office of Profit or Trust under [the U.S. government], shall, without the Consent of the Congress, accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State."

The letter says "there is no question" the clause applies to Trump and that he is violating it, because of the Trump Organization's extensive business operations, many of them tied to foreign governments.

The article is here.

Is the Trump Administration Skirting Its Own Ethics Rules?

The hiring of three former lobbyists to work in the White House raises questions about the president’s executive order on ethics.

Justin Elliott
The Pacific Standard
Originally published on March 7, 2017

The Trump administration appears to be either ignoring or exempting top staffers from its own watered-down ethics rules.

As we have detailed, President Donald Trump in January issued an order weakening Obama-era ethics policies, allowing lobbyists to work at agencies they had sought to influence. The Trump order did limit what lobbyists could do once they entered government, banning them from directly handling issues on which they had lobbied.

But the administration may not be even following that.

We’ve found three hires announced this week who, in fact, are working on the same issues on which they were registered lobbyists while in the private sector.

The article is here.

Saturday, March 11, 2017

The Moral and Legal Permissibility of Placebo-Controlled Trials

Mina Henaen
Princeton Journal of Bioethics
Princeton University
Originally posted August 15, 2016

Leaders of research ethics organizations have made placebo-controlled trials illegal whenever placebo groups would not receive currently existing treatment for their ailment, slowing down research for cheaper and more effective treatments. In this essay, I argue that placebo-controlled trials (PCTs) are both morally and legally permissible whenever they provide care that is better than the local standard of care. Contrary to what opponents of PCTs often put forth, I argue that researchers conducting PCTs are not exploiting developing nations, or subjects from these nations, when they conduct their research there. I then show that these researchers are also not legally required to provide treatment to their placebo-group subjects. I present some of the benefits of such research to the placebo groups as well and consider the moral impermissibility of making such research illegal.

The article is here.

Friday, March 10, 2017

Why genetic testing for genes for criminality is morally required

Julian Savulescu
Princeton Journal of Bioethics [2001, 4:79-97]

Abstract

This paper argues for a Principle of Procreative Beneficence, that couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others. If there are a number of different variants of a given gene, then we have most reason to select embryos which have those variants which are associated with the best lives, that is, those lives with the highest levels of well-being. It is possible that in the future some genes are identified which make it more likely that a person will engage in criminal behaviour. If that criminal behaviour makes that person's life go worse (as it plausibly would), and if those genes do not have other good effects in terms of promoting well-being, then we have a strong reason to encourage couples to test their embryos and select those with the most favourable genetic profile. This paper was derived from a talk given as a part of the Decamp Seminar Series at the Princeton University Center for Human Values, October 4, 2000.

The article is here.

A Hippocratic Oath for AI Developers?

Benedict Dellot
RSA.org
Originally posted February 13, 2017

Here is an excerpt:

The largest tech companies – Apple, Amazon, Google, IBM, Microsoft and Facebook – have already committed to creating new standards to guide the development of artificial intelligence. Likewise, a recent EU Parliament investigation recommended the development of an advisory code for robotic engineers, as well as ‘electronic personhood’ for the most sophisticated robots to ensure their behaviour is captured by legal systems.

Other ideas include regulatory ‘sandboxes’ that would give AI developers more freedom to experiment but under the close supervision of the authorities, and ‘software deposits’ for private code that would allow consumer rights organisations and government inspectors the opportunity to audit algorithms behind closed doors. Darpa recently kicked off a new programme called Explainable AI (XAI), which aims to create machine learning systems that can explain the steps they take to arrive at a decision, as well as unpack the strengths and weaknesses of their conclusions.

There have even been calls to instate a Hippocratic Oath for AI developers. This would have the advantage of going straight to the source of potential issues – the people who write the code – rather than relying on the resources, skills and time of external enforcers. An oath might also help to concentrate the minds of the programming community as a whole in getting to grips with the above dilemmas. Inspiration can be taken from the way the IEEE, a technical professional association in the US, has begun drafting a framework for the ‘ethically aligned design’ of AI.

The article is here.

Thursday, March 9, 2017

Florida Doctors May Discuss Guns With Patients, Court Rules

Lizette Alvarez
The New York Times
Originally posted February 2017

Here is an excerpt:

A federal appeals court cleared the way on Thursday for Florida doctors to talk to their patients about gun safety, overturning a 2011 law that pitted medical providers against the state's powerful gun lobby.

In its 10-to-1 ruling, the full panel of the United States Circuit Court of Appeals for the 11th Circuit concluded that doctors could not be threatened with losing their license for asking patients if they owned guns and for discussing gun safety because to do so would violate their free speech.

"Florida does not have carte blanche to restrict the speech of doctors and medical professionals on a certain subject without satisfying the demands of heightened scrutiny," the majority wrote in its decision. In its lawsuit, the medical community argued that questions about gun storage were crucial to public health because of the relationship between firearms and both the suicide rate and the gun-related deaths of children.

A number of doctors and medical organizations sued Florida in a case that came to be known as Docs v. Glocks, after the popular handgun.

The article is here.

Why You Should Donate Your Medical Data When You Die

By David Martin Shaw, J. Valérie Gross, Thomas C. Erren
The Conversation on February 16, 2017

Here is an excerpt:

But organs aren’t the only thing that you can donate once you’re dead. What about donating your medical data?

Data might not seem important in the way that organs are. People need organs just to stay alive, or to avoid being on dialysis for several hours a day. But medical data are also very valuable—even if they are not going to save someone’s life immediately. Why? Because medical research cannot take place without medical data, and the sad fact is that most people’s medical data are inaccessible for research once they are dead.

For example, working in shifts can be disruptive to one’s circadian rhythms. This is now thought by some to probably cause cancer. A large cohort study involving tens or hundreds of thousands of individuals could help us to investigate different aspects of shift work, including chronobiology, sleep impairment, cancer biology and premature aging. The results of such research could be very important for cancer prevention. However, any such study could currently be hamstrung by the inability to access and analyze participants’ data after they die.

The article is here.

Wednesday, March 8, 2017

The Moral Insignificance of Self-consciousness

Joshua Shepherd
European Journal of Philosophy
First published February 2, 2017

Abstract

In this paper, I examine the claim that self-consciousness is highly morally significant, such that the fact that an entity is self-conscious generates strong moral reasons against harming or killing that entity. This claim is apparently very intuitive, but I argue it is false. I consider two ways to defend this claim: one indirect, the other direct. The best-known arguments relevant to self-consciousness's significance take the indirect route. I examine them and argue that (a) in various ways they depend on unwarranted assumptions about self-consciousness's functional significance, and (b) once these assumptions are undermined, motivation for these arguments dissipates. I then consider the direct route to self-consciousness's significance, which depends on claims that self-consciousness has intrinsic value or final value. I argue what intrinsic or final value self-consciousness possesses is not enough to generate strong moral reasons against harming or killing.

The article is here.

A Computer to Rival the Brain

Kelly Clancy  
The New Yorker
February 15, 2017

Here is an excerpt:

Computers are often likened to brains, but they work in a manner foreign to biology. The computing architecture still in use today was first described by the mathematician John von Neumann and his colleagues in 1945. A modern laptop is conceptually identical to the punch-card behemoths of the past, although engineers have traded paper for a purely electric stream of on-off signals. In a von Neumann machine, all data-crunching happens in the central processing unit (C.P.U.). Program instructions, then data, flow from the computer’s memory to its C.P.U. in an orderly series of zeroes and ones, much like a stack of punch cards shuffling through. Although multicore computers allow some processing to occur in parallel, their efficacy is limited: software engineers must painstakingly choreograph these streams of information to avoid catastrophic system errors. In the brain, by contrast, data run simultaneously through billions of parallel processors—that is, our neurons. Like computers, they communicate in a binary language of electrical spikes. The difference is that each neuron is pre-programmed, whether through genetic patterning or learned associations, to share its computations directly with the proper targets. Processing unfolds organically, without the need for a C.P.U.
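
To make the “orderly series” concrete, here is a minimal sketch of a von Neumann-style fetch-execute loop: instructions and data share one memory, and a single processing loop handles every step in sequence. The tiny instruction set is invented for illustration.

```python
# Minimal sketch of the von Neumann pattern described above: instructions
# and data live in one memory, and a single processing loop fetches and
# executes them in order. The instruction set here is invented.

memory = [
    ("LOAD", 7),    # put 7 in the accumulator
    ("ADD", 5),     # add 5
    ("STORE", 0),   # write the result to data cell 0
    ("HALT", None),
]
data = [0]
acc = 0
pc = 0  # program counter: the orderly, serial stream the excerpt describes

while True:
    op, arg = memory[pc]
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":
        data[arg] = acc
    elif op == "HALT":
        break
    pc += 1

print(data[0])  # 12 -- every step passed through the single processing loop
```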

The article is here.

Note: Consciousness is a product of evolution. Artificial intelligence is a product of evolved brains.

Tuesday, March 7, 2017

Chimpanzees’ Bystander Reactions to Infanticide

Claudia Rudolf von Rohr, Carel P. van Schaik, Alexandra Kissling, & Judith M. Burkart
Human Nature
June 2015, Volume 26, Issue 2, pp 143–160

Abstract

Social norms—generalized expectations about how others should behave in a given context—implicitly guide human social life. However, their existence becomes explicit when they are violated because norm violations provoke negative reactions, even from personally uninvolved bystanders. To explore the evolutionary origin of human social norms, we presented chimpanzees with videos depicting a putative norm violation: unfamiliar conspecifics engaging in infanticidal attacks on an infant chimpanzee. The chimpanzees looked far longer at infanticide scenes than at control videos showing nut cracking, hunting a colobus monkey, or displays and aggression among adult males. Furthermore, several alternative explanations for this looking pattern could be ruled out. However, infanticide scenes did not generally elicit higher arousal. We propose that chimpanzees as uninvolved bystanders may detect norm violations but may restrict emotional reactions to such situations to in-group contexts. We discuss the implications for the evolution of human morality.

The article is here.

Experiments suggest dogs and monkeys have a human-like sense of morality

Bob Yirka
Phys.org
Originally posted February 15, 2017

A team of researchers from Kyoto University has found that dogs and capuchin monkeys watch how humans interact with one another and react less positively to those that are less willing to help or share. In their paper published in the journal Neuroscience & Biobehavioral Reviews, the team describes a series of experiments they carried out with several dogs and capuchin monkeys and what they discovered about both species' social preferences.

The article is here.

Target Article:

James R. Anderson et al., Third-party social evaluations of humans by monkeys and dogs, Neuroscience & Biobehavioral Reviews (2017).
DOI: 10.1016/j.neubiorev.2017.01.003

Monday, March 6, 2017

Almost All Of You Would Cheat And Steal If The People In Charge Imply It's Okay

Charlie Sorrel
www.fastcoexist.com
Originally posted February 2, 2017

Would you cheat on a test to get money? Would you steal from an envelope of cash if you thought nobody would notice? What if the person in charge implied that it was acceptable to lie and steal? That's what Dan Ariely's Corruption Experiment set out to discover. And here's a spoiler: If you're like the rest of the population, you would cheat and steal.

Ariely is a behavioral scientist who specializes in the depressingly bad conduct of humans. In this lecture clip, he details his Corruption Experiment. In it, participants are given a die and told they can take home, in real dollars, the numbers they throw. The twist is that they can choose the number on the top or the bottom of the die, and they only need to tell the person running the experiment which they picked after they throw. So, if the die comes up with a one on top, they can claim that they picked the six on the bottom. Not surprisingly, most of the time, people picked the higher number.
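
Note: the incentive built into this design is easy to check with a quick simulation. The sketch below assumes a standard die, whose opposite faces sum to 7, and a purely self-serving reporter who always claims the higher of the two faces; both assumptions are ours, not details from Ariely's protocol.

```python
# Simulate the die task: honest reporting vs. always claiming the
# more lucrative face. Payout equals the reported number, in dollars.
import random

TRIALS = 100_000
honest_total = self_serving_total = 0

for _ in range(TRIALS):
    top = random.randint(1, 6)
    bottom = 7 - top                        # opposite faces sum to 7
    honest_total += top                     # stick with the face committed to
    self_serving_total += max(top, bottom)  # claim whichever face pays more

print(f"honest average payout:       ${honest_total / TRIALS:.2f}")        # ~3.50
print(f"self-serving average payout: ${self_serving_total / TRIALS:.2f}")  # ~5.00
```

The gap — roughly $3.50 versus $5.00 per throw — is the quiet premium the design offers for dishonesty.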

The article and the video are here.

Cultivating Moral Resilience

Cynda Rushton
American Journal of Nursing:
February 2017 - Volume 117 - Issue 2 - p S11–S15
doi: 10.1097/01.NAJ.0000512205.93596.00

Here is an excerpt:

To derive meaning from moral distress, one must first change the relationship with the suffering that it causes. Human beings have the potential to consciously decide what mindset they will bring to a given situation; they have the option to choose a path of mindful awareness and inquiry over one of helplessness and frustration. When people are mired in the “judger pit,” the tone of their conversation is punctuated by negativity, closed thinking, and judgment of themselves and others. Alternatively, when in an inquiring mindset, they are more inclined to remain positive—despite their distress—and are able to ask questions that may help reveal unknown or overlooked possibilities.

Shifting the focus from helplessness to resilience offers promising possibilities in designing interventions to help mitigate the effects of moral distress. Resilience—an umbrella concept that has been applied in diverse fields of study—can be psychological, physiologic, genetic, sociologic, organizational or communal, or moral. Although there is no unifying definition, resilience generally refers to the ability to recover from or healthfully adapt to challenges, stress, adversity, or trauma. One definition characterizes it as “the process of harnessing biological, psychosocial, structural, and cultural resources to sustain wellbeing.”

Psychological resilience, for example, “involves the creation of meaning in life, even life that is sometimes painful or absurd, and having the courage to live life fully despite its inherent pain and futility.”

The article is here.

Sunday, March 5, 2017

What We Know About Moral Distress

Patricia Rodney
AJN, American Journal of Nursing:
February 2017 - Volume 117 - Issue 2 - p S7–S10
doi: 10.1097/01.NAJ.0000512204.85973.04

Moral distress arises when nurses are unable to act according to their moral judgment. The concept is relatively recent, dating to American ethicist Andrew Jameton's 1984 landmark text on nursing ethics. Until that point, distress among clinicians had been understood primarily through psychological concepts such as stress and burnout, which, although relevant, were not sufficient. With the introduction of the concept of moral distress, Jameton added an ethical dimension to the study of distress.

Background

In the 33 years since Jameton's inaugural work, many nurses, inspired by the concept of moral distress, have continued to explore what happens when nurses are constrained from translating moral choice into moral action, and are consequently unable to uphold their sense of integrity and the values emphasized in the American Nurses Association's Code of Ethics for Nurses with Interpretive Statements. Moral distress might occur when, say, a nurse on a busy acute medical unit can't provide comfort and supportive care to a dying patient because of insufficient staffing.

The article is here.

Saturday, March 4, 2017

JAMA Forum: Those Pesky Lines Around States

Larry Levitt
JAMA Forum Blog
Originally posted October 19, 2016

Here is an excerpt:

Allowing insurers to then sell plans across state lines would actually worsen access to coverage for people with preexisting conditions, since insurers would have a strong incentive to set up shop in states with minimal regulation, undermining the ability of other states to enact stricter rules.

Let’s say Delaware wanted to attract health insurance jobs to its state with industry-friendly regulations—for example, no required benefits (such as preventive services or maternity care) and no restrictions on medical underwriting (meaning people with preexisting conditions could be denied coverage). Insurers operating out of Delaware could offer cheaper health insurance by cherry-picking healthy enrollees from other states. If New York tried to require insurers to expand access to people with preexisting conditions or mandate specific benefits, its carriers would get stuck with disproportionately sick people.

Delaware is not a random example here. This is exactly what happened in the credit card industry after the Supreme Court ruled in 1978 that credit card companies could follow interest rate rules in the states where they operate, not the state where the cardholder lives. Two states—Delaware and South Dakota—moved quickly to deregulate interest rates, and banks followed suit by moving their credit card operations to those states. By 1997 Delaware had 43% of the nation’s credit card volume.
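
Note: the cherry-picking dynamic Levitt describes is a classic adverse-selection spiral, and a toy example makes the arithmetic vivid. All numbers below are invented for illustration; nothing here comes from the post.

```python
# Toy risk pool: eight healthy enrollees and two sick ones, with
# hypothetical annual claims costs in dollars.
pool = [1_000] * 8 + [20_000] * 2

community_premium = sum(pool) / len(pool)
print(f"premium with everyone in one pool: ${community_premium:,.0f}")  # $4,800

# A lightly regulated out-of-state insurer underwrites aggressively and
# enrolls only the healthy; the stricter state's pool keeps the rest.
healthy = [cost for cost in pool if cost < 5_000]
remaining = [cost for cost in pool if cost >= 5_000]

print(f"cherry-picked pool's premium: ${sum(healthy) / len(healthy):,.0f}")    # $1,000
print(f"remaining pool's premium:     ${sum(remaining) / len(remaining):,.0f}")  # $20,000
```

The cheap plan looks like a triumph of competition, but the savings come entirely from shifting costly enrollees into the stricter state's pool.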

The blog post is here.

How ‘Intellectual Humility’ Can Make You a Better Person

Cindy Lamothe
The Science of Us
Originally posted February 3, 2017

There’s a well-known Indian parable about six blind men who argue at length about what an elephant feels like. Each has a different idea, and each holds fast to his own view. “It’s like a rope,” says the man who touched the tail. “Oh no, it’s more like the solid branch of a tree,” contends the one who touched the trunk. And so on and so forth, and round and round they go.

The moral of the story: We all have a tendency to overestimate how much we know — which, in turn, means that we often cling stubbornly to our beliefs while tuning out opinions different from our own. We generally believe we’re better or more correct than everyone else, or at least better than most people — a psychological quirk that’s as true for politics and religion as it is for things like fashion and lifestyles. And in a time when it seems like we’re all more convinced than ever of our own rightness, social scientists have begun to look more closely at an antidote: a concept called intellectual humility.

Unlike general humility — which is defined by traits like sincerity, honesty, and unselfishness — intellectual humility has to do with understanding the limits of one’s knowledge. It’s a state of openness to new ideas, a willingness to be receptive to new sources of evidence, and it comes with significant benefits: People with intellectual humility are both better learners and better able to engage in civil discourse. Google’s VP in charge of hiring, Laszlo Bock, has claimed it as one of the top qualities he looks for in a candidate: Without intellectual humility, he has said, “you are unable to learn.”

The article is here.

Friday, March 3, 2017

California Regulator Slams Health Insurers Over Faulty Doctor Lists

Chad Terhune
Kaiser Health News
Originally published February 13, 2017

California’s biggest health insurers reported inaccurate information to the state on which doctors are in their networks, offering conflicting lists that differed by several thousand physicians, according to a new state report.

Shelley Rouillard, director of the California Department of Managed Health Care, said 36 of 40 health insurers she reviewed — including industry giants like Aetna and UnitedHealthcare — could face fines for failing to submit accurate data or comply with state rules.

Rouillard said she told health plan executives in a meeting last week that such widespread errors made it impossible for regulators to tell whether patients have timely access to care in accordance with state law.

“I told the CEOs it looks to me like nobody cared. We will be holding their feet to the fire on this,” Rouillard said in an interview with California Healthline. “I am frustrated with the health plans because the data we got was unacceptable. It was a mess.”

The article is here.

Doctors suffer from the same cognitive distortions as the rest of us

Michael Lewis
Nautilus
Originally posted February 9, 2017

Here are two excerpts:

What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling is that the mistakes were predictable and systematic. They seemed ingrained in human nature. One passage in particular stuck with him—about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.” This wasn’t just about how many words in the English language started with the letter K. This was about life and death.

(cut)

Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

The article is here.

Thursday, March 2, 2017

Jail cells await mentally ill in Rapid City

Mike Anderson
Rapid City Journal
Originally published February 7, 2017

Mentally ill people in Rapid City who have committed no crimes will probably end up in jail because of a major policy change recently announced by Rapid City Regional Hospital.

The hospital is no longer taking in certain types of mentally ill patients and will instead contact the Pennington County Sheriff’s Office to take them into custody.

The move has prompted criticism from local law enforcement officials, who say the decision was made suddenly and without their input.

“In my view, this is the biggest step backward our community has experienced in terms of health care for mental health patients,” said Rapid City police Chief Karl Jegeris. “And though it’s legally permissible by statute to put someone in an incarceration setting, it doesn’t mean that it’s the right thing to do.”

This is the second major policy change to come out of Regional in recent days that places limits on the type of mental health care the hospital will provide.

The article is here.

Pornography and the Philosophy of Fiction

John Danaher
Philosophical Disquisitions
Originally published February 9, 2017

Here are two excerpts:

Pornography is now ubiquitous. If you have an internet connection, you have access to a virtually inexhaustible supply of the stuff. Debates rage over whether this is a good or bad thing. There are long-standing research programmes in psychology and philosophy that focus on the ethical and social consequences of exposure to pornography. These debates often raise important questions about human sexuality, gender equality, sexual aggression and violence. They also often touch upon (esoteric) aspects of the philosophy of speech acts and freedom of expression. Noticeably neglected in the debate is any discussion of the fictional nature of pornography and how it affects its social reception.

That, at any rate, is the claim made by Shen-yi Liao and Sara Protasi in their article ‘The Fictional Character of Pornography’. In it, they draw upon a number of ideas in the philosophy of aesthetics in an effort to refine the arguments made by participants in the pornography debate.

(cut)

The more important part of the definition concerns the prompting of imagination. Liao and Protasi have a longish argument in their paper as to why sexual desire (as an appetite) involves imagination and hence why pornographic representations often prompt imaginings. That argument is interesting, but I’m going to skip over the details here. The important point is that in satisfying our sexual appetites we often engage the imagination (imagining certain roles or actions). Indeed, the sexual appetite might be unique among appetites as being the one that can be satisfied purely through the imagination. Furthermore, the typical user of pornography will often engage their imaginations when using it. They will imagine themselves being involved (directly or indirectly) in the represented sexual acts.

The blog post is here.

Wednesday, March 1, 2017

Clinicians’ Expectations of the Benefits and Harms of Treatments, Screening, and Tests

Tammy C. Hoffmann & Chris Del Mar
JAMA Intern Med. 
Published online January 9, 2017.
doi:10.1001/jamainternmed.2016.8254

Question

Do clinicians have accurate expectations of the benefits and harms of treatments, screening, and tests?

Findings

In this systematic review of 48 studies (13,011 clinicians), most participants correctly estimated harm for only 13% of the 69 outcomes with harm expectation data, and benefit for only 11% of the 28 outcomes with benefit expectation data. The majority of participants overestimated benefit for 32% of outcomes, underestimated benefit for 9%, underestimated harm for 34%, and overestimated harm for 5% of outcomes.

Meaning

Clinicians rarely had accurate expectations of benefits or harms, with inaccuracies in both directions, but more often underestimated harms and overestimated benefits.
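
Note: the percentages above are easier to grasp as raw counts. A back-of-the-envelope conversion (our arithmetic, not figures reported in the paper):

```python
# Convert the review's accuracy percentages into approximate counts
# of outcomes that most clinicians estimated correctly.
harm_outcomes, benefit_outcomes = 69, 28

print(round(0.13 * harm_outcomes))     # ~9 of 69 harm outcomes
print(round(0.11 * benefit_outcomes))  # ~3 of 28 benefit outcomes
```

In other words, on the large majority of outcomes studied, most clinicians surveyed missed the mark.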

The research is here.

Should healthcare professionals sometimes allow harm? The case of self-injury

Patrick J Sullivan
Journal of Medical Ethics 
Published Online First: 09 February 2017.
doi: 10.1136/medethics-2015-103146

Abstract

This paper considers the ethical justification for the use of harm minimisation approaches with individuals who self-injure. While the general issues concerning harm minimisation have been widely debated, there has been only limited consideration of the ethical issues raised by allowing people to continue injuring themselves as part of an agreed therapeutic programme. I will argue that harm minimisation should be supported on the basis that it results in an overall reduction in harm when compared with more traditional ways of dealing with self-injurious behaviour. It will be argued that this is an example of a situation where healthcare professionals sometimes have a moral obligation to allow harm to come to their patients.

The article is here.