"Living a fully ethical life involves doing the most good we can." - Peter Singer
"Common sense is not so common." - Voltaire
"There are two ways to be fooled. One is to believe what isn't true; the other is to refuse to believe what is true." - Søren Kierkegaard

Friday, April 28, 2017

How rational is our rationality?

Interview by Richard Marshall
3:AM Magazine
Originally posted March 18, 2017

Here is an excerpt:

As I mentioned earlier, I think that the point of the study of rationality, and of normative epistemology more generally, is to help us figure out how to inquire, and the aim of inquiry, I believe, is to get at the truth. This means that there had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true. The goal of the paper is to get clear about why, and to what extent, it nonetheless makes sense to expect that rational beliefs will be more accurate than irrational ones.

One reason this should be of interest to non-philosophers is that if it turns out that there isn’t some close connection between rationality and truth, then we should be much less critical of people with irrational beliefs. They may reasonably say: “Sure, my belief is irrational – but I care about the truth, and since my irrational belief is true, I won’t abandon it!” It seems like there’s something wrong with this stance, but to justify why it’s wrong, we need to get clear on the connection between a judgment about a belief’s rationality and a judgment about its truth.

The account I give is difficult to summarize in just a few sentences, but I can say this much: what we say about the connection between what’s rational and what’s true will depend on whether we think it’s rational to doubt our own rationality. If it can be rational to doubt our own rationality (which I think is plausible), then the connection between rationality and truth is, in a sense, surprisingly tenuous.

The interview is here.

First, do no harm: institutional betrayal and trust in health care organizations

Carly Parnitzke Smith
Journal of Multidisciplinary Healthcare
April, 2017; Volume 10; Pages 133-144


Patients’ trust in health care is increasingly recognized as important to quality care, yet questions remain about what types of health care experiences erode trust. The current study assessed the prevalence and impact of institutional betrayal on patients’ trust and engagement in health care.

Participants and methods:

Participants who had sought health care in the US in October 2013 were recruited from an online marketplace, Amazon’s Mechanical Turk. Participants (n = 707; 73% Caucasian; 56.8% female; 9.8% lesbian, gay, or bisexual; median age between 18 and 35 years) responded to survey questions about health care use, trust in health care providers and organizations, negative medical experiences, and institutional betrayal.


Results:

Institutional betrayal was reported by two-thirds of the participants and predicted disengagement from health care (r = 0.36, p < 0.001). Mediational models (tested using bootstrapping analyses) indicated a negative, nonzero pathway between institutional betrayal and trust in health care organizations (b = -0.05, 95% confidence interval [CI] = [-0.07, -0.02]), controlling for trust in physicians and hospitalization history. These negative effects were not buffered by trust in one’s own physician; in fact, patients who trusted their physician more reported lower trust in health care organizations following negative medical events (interaction b = -0.02, 95% CI = [-0.03, -0.01]).


Conclusion:

Clinical implications are discussed, concluding that institutional betrayal decreases patient trust and engagement in health care.
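For readers unfamiliar with the method, the "mediational models (tested using bootstrapping analyses)" in the abstract can be sketched in a few lines. Everything below is a hypothetical illustration: the data are simulated, and the variable names and effect sizes are stand-ins, not the study's dataset.

```python
import numpy as np

# Illustrative only: simulated data with made-up effect sizes, showing the
# shape of a percentile-bootstrap test of an indirect (mediated) effect.
rng = np.random.default_rng(0)
n = 707  # matches the reported sample size

betrayal = rng.normal(size=n)
trust = -0.3 * betrayal + rng.normal(size=n)                    # a path
engagement = 0.4 * trust - 0.2 * betrayal + rng.normal(size=n)  # b, c' paths

def indirect_effect(x, m, y):
    """a*b indirect effect from two least-squares fits: m ~ x, then y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]                       # slope of m on x
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # coefficient on m
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate a*b,
# and read the 95% CI off the empirical distribution.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(betrayal[idx], trust[idx], engagement[idx]))
ci = np.percentile(boot, [2.5, 97.5])
print("95% bootstrap CI for the indirect effect:", ci)
```

A confidence interval that excludes zero, as in the study's reported [-0.07, -0.02], is what licenses the "negative, nonzero pathway" language.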

The article is here.

Thursday, April 27, 2017

Groups File Ethics Complaints Over State Department’s Mar-a-Lago Blog Post

Avalon Zoppo and Abigail Williams
Originally posted April 25, 2017

An ethics advocacy group has filed a complaint calling for an investigation into the State Department's glowing description of President Donald Trump's Mar-a-Lago club on its website.

The complaint, filed Tuesday with the Office of Government Ethics by the group Common Cause, is in response to a blog post published on the State Department's ShareAmerica website that referred to Mar-a-Lago as the "winter White House" and noted that it is open to paying members.

Published in early April, prior to a meeting with China's President Xi Jinping at the Palm Beach club, the post detailed the history of Mar-a-Lago and appeared on the websites of the U.S. Embassies in the United Kingdom and Albania.

By Monday, the post had been removed and replaced by a brief note saying it was meant only to inform. "We regret any misperception and have removed the post," the note said. State Department Acting Spokesperson Mark Toner said Tuesday that the post was not intended to promote any private business.

The article is here.

Does studying ethics affect moral views? An application to economic justice

James Konow
Journal of Economic Methodology
Published online: 05 Apr 2017


Recent years have witnessed a rapid increase in initiatives to expand ethics instruction in higher education. Numerous empirical studies have examined the possible effects on students of discipline-based ethics instruction, such as business ethics and medical ethics. Nevertheless, the largest share of college ethics instruction has traditionally fallen to philosophy departments, and there is a paucity of empirical research on the individual effects of that approach. This paper examines possible effects of exposure to readings and lectures in mandatory philosophy classes on student views of morality. Specifically, it focuses on an ethical topic of importance to both economics and philosophy, viz. economic (or distributive) justice. The questionnaire study is designed to avoid features suspected of generating false positives in past research while calibrating the measurement so as to increase the likelihood of detecting even a modest true effect. The results provide little evidence that the philosophical ethics approach studied here systematically affects the fairness views of students. The possible implications for future research and for ethics instruction are briefly discussed.

The article is here.

Wednesday, April 26, 2017

Living a lie: We deceive ourselves to better deceive others

Matthew Hutson
Scientific American
Originally posted April 8, 2017

People mislead themselves all day long. We tell ourselves we’re smarter and better looking than our friends, that our political party can do no wrong, that we’re too busy to help a colleague. In 1976, in the foreword to Richard Dawkins’s “The Selfish Gene,” the biologist Robert Trivers floated a novel explanation for such self-serving biases: We dupe ourselves in order to deceive others, creating social advantage. Now, after four decades, Trivers and his colleagues have published the first research supporting his idea.

Psychologists have identified several ways of fooling ourselves: biased information-gathering, biased reasoning and biased recollections. The new work, forthcoming in the Journal of Economic Psychology, focuses on the first — the way we seek information that supports what we want to believe and avoid that which does not.

The article is here.

Moral judging helps people cooperate better in groups

Science Blog
Originally posted April 7, 2017

Here is an excerpt:

“Generally, people think of moral judgments negatively,” Willer said. “But they are a critical means for encouraging good behavior in society.”

Researchers also found that the groups who were allowed to make positive or negative judgments of each other were more trusting and generous toward each other.

In addition, the levels of cooperation in such groups were found to be comparable with groups where monetary punishments were used to promote collaboration within the group, according to the study, titled “The Enforcement of Moral Boundaries Promotes Cooperation and Prosocial Behavior in Groups.”

The power of social approval

The idea that moral judgments are fundamental to social order has been around since the late 19th century. But most existing research has looked at moral reasoning and judgments as an internal psychological process.

Few studies so far have examined how costless expressions of liking or disapproval can affect individual behavior in groups, and none of these studies investigated how moral judgments compare with monetary sanctions, which have been shown to lead to increased cooperation as well, Willer said.

The article is here.

Tuesday, April 25, 2017

Artificial synapse on a chip will help mobile devices learn like the human brain

Luke Dormehl
Digital Trends
Originally posted April 6, 2017

Brain-inspired deep learning neural networks have been behind many of the biggest breakthroughs in artificial intelligence seen over the past 10 years.

But a new research project from the National Center for Scientific Research (CNRS), the University of Bordeaux, and Norwegian information technology company Evry could take these breakthroughs to the next level — thanks to the creation of an artificial synapse on a chip.

“There are many breakthroughs from software companies that use algorithms based on artificial neural networks for pattern recognition,” Dr. Vincent Garcia, a CNRS research scientist who worked on the project, told Digital Trends. “However, as these algorithms are simulated on standard processors they require a lot of power. Developing artificial neural networks directly on a chip would make this kind of tasks available to everyone, and much more power efficient.”

Synapses in the brain function as the connections between neurons. Learning takes place when these connections are reinforced and improved as synapses are stimulated. The newly developed electronic devices (called “memristors”) emulate the behavior of these synapses by way of a variable resistance that depends on the history of electronic excitations they receive.
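The memristor behavior described above, a conductance that drifts with the device's excitation history, can be sketched in software. This is a minimal toy model: the class name, update rule, and parameter values are hypothetical illustrations, not the actual device physics reported by the CNRS team.

```python
# Toy model of a memristive synapse: its conductance ("weight") changes with
# each voltage pulse, and the size of the change depends on the current state,
# so the device effectively remembers its history of excitations.

class MemristorSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1):
        self.g = g                      # conductance, the synaptic weight
        self.g_min, self.g_max = g_min, g_max
        self.rate = rate                # learning-rate-like scaling factor

    def pulse(self, voltage):
        """A positive pulse potentiates (raises g toward g_max); a negative
        pulse depresses (lowers g toward g_min). Changes shrink as the
        device approaches either bound."""
        if voltage > 0:
            dg = self.rate * voltage * (self.g_max - self.g)
        else:
            dg = self.rate * voltage * (self.g - self.g_min)
        self.g = min(self.g_max, max(self.g_min, self.g + dg))
        return self.g

syn = MemristorSynapse()
for _ in range(5):
    syn.pulse(+1.0)   # repeated excitation reinforces the connection
print(round(syn.g, 3))
```

Repeated positive pulses push the conductance up with diminishing steps, mimicking the reinforcement of a biological synapse under stimulation; in the hardware version this state is held physically in the device's resistance rather than in a stored variable.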

The article is here.

Can Robots Be Ethical?

Robert Newman
Philosophy Now
Apr/May 2017 Issue 119

Here is an excerpt:

Delegating ethics to robots is unethical not just because robots do binary code, not ethics, but also because no program could ever process the incalculable contingencies, shifting subtleties, and complexities entailed in even the simplest case to be put before a judge and jury. And yet the law is another candidate for outsourcing, to ‘ethical’ robot lawyers. Last year, during a BBC Radio 4 puff-piece on the wonders of robotics, a senior IBM executive explained that while robots can’t do the fiddly manual jobs of gardeners or janitors, they can easily do all that lawyers do, and will soon make human lawyers redundant. However, when IBM Vice President Bob Moffat was himself on trial in the Manhattan Federal Court, accused in the largest hedge fund insider-trading case in history, he inexplicably reposed all his hopes in one of those old-time human defence attorneys. A robot lawyer may have saved him from being found guilty of two counts of conspiracy and fraud, but when push came to shove, the IBM VP knew as well as the rest of us that the phrase ‘ethical robots’ is a contradiction in terms.

The article is here.

Monday, April 24, 2017

How Flawed Science Is Undermining Good Medicine

Morning Edition
Originally posted April 6, 2017

Here is an excerpt:

A surprising medical finding caught the eye of NPR's veteran science correspondent Richard Harris in 2014. A scientist from the drug company Amgen had reviewed the results of 53 studies that were originally thought to be highly promising — findings likely to lead to important new drugs. But when the Amgen scientist tried to replicate those promising results, in most cases he couldn't.

"He tried to reproduce them all," Harris tells Morning Edition host David Greene. "And of those 53, he found he could only reproduce six."

That was "a real eye-opener," says Harris, whose new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions explores the ways even some talented scientists go wrong — pushed by tight funding, competition and other constraints to move too quickly and sloppily to produce useful results.

"A lot of what everybody has reported about medical research in the last few years is actually wrong," Harris says. "It seemed right at the time but has not stood up to the test of time."

The impact of weak biomedical research can be especially devastating, Harris learned, as he talked to doctors and patients. And some prominent scientists he interviewed told him they agree that it's time to recognize the dysfunction in the system and fix it.

The article is here.