"Living a fully ethical life involves doing the most good we can." - Peter Singer
"Common sense is not so common." - Voltaire
"There are two ways to be fooled. One is to believe what isn't true; the other is to refuse to believe what is true." - Søren Kierkegaard

Friday, April 28, 2017

How rational is our rationality?

Interview by Richard Marshall
3 AM Magazine
Originally posted March 18, 2017

Here is an excerpt:

As I mentioned earlier, I think that the point of the study of rationality, and of normative epistemology more generally, is to help us figure out how to inquire, and the aim of inquiry, I believe, is to get at the truth. This means that there had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true. The goal of the paper is to get clear about why, and to what extent, it nonetheless makes sense to expect that rational beliefs will be more accurate than irrational ones. One reason this should be of interest to non-philosophers is that if it turns out that there isn’t some close connection between rationality and truth, then we should be much less critical of people with irrational beliefs. They may reasonably say: “Sure, my belief is irrational – but I care about the truth, and since my irrational belief is true, I won’t abandon it!” It seems like there’s something wrong with this stance, but to justify why it’s wrong, we need to get clear on the connection between a judgment about a belief’s rationality and a judgment about its truth. The account I give is difficult to summarize in just a few sentences, but I can say this much: what we say about the connection between what’s rational and what’s true will depend on whether we think it’s rational to doubt our own rationality. If it can be rational to doubt our own rationality (which I think is plausible), then the connection between rationality and truth is, in a sense, surprisingly tenuous.

The interview is here.

First, do no harm: institutional betrayal and trust in health care organizations

Carly Parnitzke Smith
The Journal of Multidisciplinary Healthcare
April, 2017; Volume 10; Pages 133-144

Purpose:

Patients’ trust in health care is increasingly recognized as important to quality care, yet questions remain about what types of health care experiences erode trust. The current study assessed the prevalence and impact of institutional betrayal on patients’ trust and engagement in health care.

Participants and methods:

Participants who had sought health care in the US in October 2013 were recruited from an online marketplace, Amazon’s Mechanical Turk. Participants (n = 707; 73% Caucasian; 56.8% female; 9.8% lesbian, gay, or bisexual; median age between 18 and 35 years) responded to survey questions about health care use, trust in health care providers and organizations, negative medical experiences, and institutional betrayal.

Results:

Institutional betrayal was reported by two-thirds of the participants and predicted disengagement from health care (r = 0.36, p < 0.001). Mediational models (tested using bootstrapping analyses) indicated a negative, nonzero pathway between institutional betrayal and trust in health care organizations (b = -0.05, 95% confidence interval [CI] = [-0.07, -0.02]), controlling for trust in physicians and hospitalization history. These negative effects were not buffered by trust in one’s own physician, but in fact patients who trusted their physician more reported lower trust in health care organizations following negative medical events (interaction b = -0.02, 95%CI = [-0.03, -0.01]).
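The mediational models in these Results use bootstrapped confidence intervals for an indirect effect. As a rough illustration of the general technique (not the study's actual models, which included covariates), here is a minimal percentile-bootstrap sketch of an indirect effect a*b in a simple X → M → Y mediation, using hand-rolled OLS slopes; all function names and data are my own invention:

```python
import random

def slope(u, v):
    """OLS slope of v regressed on u (one predictor, with intercept)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

def residuals(u, v):
    """Residuals of v after regressing v on u."""
    s, mu, mv = slope(u, v), sum(u) / len(u), sum(v) / len(v)
    return [b - (mv + s * (a - mu)) for a, b in zip(u, v)]

def bootstrap_indirect(x, m, y, n_boot=2000, seed=7):
    """Point estimate and percentile-bootstrap 95% CI for the indirect
    effect a*b in a simple X -> M -> Y mediation model."""
    rng = random.Random(seed)
    n = len(x)

    def ab(idx):
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        a = slope(xs, ms)  # a path: X -> M
        # b path: effect of M on Y controlling for X (Frisch-Waugh)
        b = slope(residuals(xs, ms), residuals(xs, ys))
        return a * b

    draws = sorted(ab([rng.randrange(n) for _ in range(n)]) for _ in range(n_boot))
    return ab(range(n)), (draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)])
```

On simulated data where X affects Y only through M, the returned interval should exclude zero, which is the logic behind the study's reported pathway of b = -0.05, 95% CI [-0.07, -0.02].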

Conclusion:

Clinical implications are discussed, concluding that institutional betrayal decreases patient trust and engagement in health care.

The article is here.

Thursday, April 27, 2017

Groups File Ethics Complaints Over State Department’s Mar-a-Lago Blog Post

Avalon Zoppo and Abigail Williams
NBC.com
Originally posted April 25, 2017

An ethics advocacy group has filed a complaint calling for an investigation into the State Department's glowing description of President Donald Trump's Mar-a-Lago club on its website.

The complaint, filed Tuesday with the Office of Government Ethics by the group Common Cause, is in response to a blog post published on the State Department's ShareAmerica website that referred to Mar-a-Lago as the "winter White House" and noted that it is open to paying members.

Published in early April, prior to a meeting with China's President Xi Jinping at the Palm Beach club, the post detailed the history of Mar-a-Lago and appeared on websites for the U.S. Embassies in the United Kingdom and Albania.

By Monday the post was removed, replaced by a brief note that said it was only meant to inform. "We regret any misperception and have removed the post," the note said. State Department Acting Spokesperson Mark Toner said Tuesday it was not intended to promote any private business.

The article is here.

Does studying ethics affect moral views? An application to economic justice

James Konow
Journal of Economic Methodology
Published online: 05 Apr 2017

Abstract

Recent years have witnessed a rapid increase in initiatives to expand ethics instruction in higher education. Numerous empirical studies have examined the possible effects on students of discipline-based ethics instruction, such as business ethics and medical ethics. Nevertheless, the largest share of college ethics instruction has traditionally fallen to philosophy departments, and there is a paucity of empirical research on the individual effects of that approach. This paper examines possible effects of exposure to readings and lectures in mandatory philosophy classes on student views of morality. Specifically, it focuses on an ethical topic of importance to both economics and philosophy, viz. economic (or distributive) justice. The questionnaire study is designed to avoid features suspected of generating false positives in past research while calibrating the measurement so as to increase the likelihood of detecting even a modest true effect. The results provide little evidence that the philosophical ethics approach studied here systematically affects the fairness views of students. The possible implications for future research and for ethics instruction are briefly discussed.

The article is here.

Wednesday, April 26, 2017

Living a lie: We deceive ourselves to better deceive others

Matthew Hutson
Scientific American
Originally posted April 8, 2017

People mislead themselves all day long. We tell ourselves we’re smarter and better looking than our friends, that our political party can do no wrong, that we’re too busy to help a colleague. In 1976, in the foreword to Richard Dawkins’s “The Selfish Gene,” the biologist Robert Trivers floated a novel explanation for such self-serving biases: We dupe ourselves in order to deceive others, creating social advantage. Now after four decades Trivers and his colleagues have published the first research supporting his idea.

Psychologists have identified several ways of fooling ourselves: biased information-gathering, biased reasoning and biased recollections. The new work, forthcoming in the Journal of Economic Psychology, focuses on the first — the way we seek information that supports what we want to believe and avoid that which does not.

The article is here.

Moral judging helps people cooperate better in groups

Science Blog
Originally posted April 7, 2017

Here is an excerpt:

“Generally, people think of moral judgments negatively,” Willer said. “But they are a critical means for encouraging good behavior in society.”

Researchers also found that the groups who were allowed to make positive or negative judgments of each other were more trusting and generous toward each other.

In addition, the levels of cooperation in such groups were found to be comparable with groups where monetary punishments were used to promote collaboration within the group, according to the study, titled “The Enforcement of Moral Boundaries Promotes Cooperation and Prosocial Behavior in Groups.”

The power of social approval

The idea that moral judgments are fundamental to social order has been around since the late 19th century. But most existing research has looked at moral reasoning and judgments as an internal psychological process.

Few studies so far have examined how costless expressions of liking or disapproval can affect individual behavior in groups, and none of these studies investigated how moral judgments compare with monetary sanctions, which have been shown to lead to increased cooperation as well, Willer said.

The article is here.

Tuesday, April 25, 2017

Artificial synapse on a chip will help mobile devices learn like the human brain

Luke Dormehl
Digital Trends
Originally posted April 6, 2017

Brain-inspired deep learning neural networks have been behind many of the biggest breakthroughs in artificial intelligence seen over the past 10 years.

But a new research project from the National Center for Scientific Research (CNRS), the University of Bordeaux, and the University of Évry could take these breakthroughs to the next level — thanks to the creation of an artificial synapse on a chip.

“There are many breakthroughs from software companies that use algorithms based on artificial neural networks for pattern recognition,” Dr. Vincent Garcia, a CNRS research scientist who worked on the project, told Digital Trends. “However, as these algorithms are simulated on standard processors they require a lot of power. Developing artificial neural networks directly on a chip would make this kind of tasks available to everyone, and much more power efficient.”

Synapses in the brain function as the connections between neurons. Learning takes place when these connections are reinforced, and improved when synapses are stimulated. The newly developed electronic devices (called “memristors”) emulate the behavior of these synapses, by way of a variable resistance that depends on the history of electronic excitations they receive.

The article is here.

Can Robots Be Ethical?

Robert Newman
Philosophy Now
Apr/May 2017 Issue 119

Here is an excerpt:

Delegating ethics to robots is unethical not just because robots do binary code, not ethics, but also because no program could ever process the incalculable contingencies, shifting subtleties, and complexities entailed in even the simplest case to be put before a judge and jury. And yet the law is another candidate for outsourcing, to ‘ethical’ robot lawyers. Last year, during a BBC Radio 4 puff-piece on the wonders of robotics, a senior IBM executive explained that while robots can’t do the fiddly manual jobs of gardeners or janitors, they can easily do all that lawyers do, and will soon make human lawyers redundant. However, when IBM Vice President Bob Moffat was himself on trial in the Manhattan Federal Court, accused of the largest hedge fund insider-trading in history, he inexplicably reposed all his hopes in one of those old-time human defence attorneys. A robot lawyer may have saved him from being found guilty of two counts of conspiracy and fraud, but when push came to shove, the IBM VP knew as well as the rest of us that the phrase ‘ethical robots’ is a contradiction in terms.

The article is here.

Monday, April 24, 2017

How Flawed Science Is Undermining Good Medicine

Morning Edition
NPR.org
Originally posted April 6, 2017

Here is an excerpt:

A surprising medical finding caught the eye of NPR's veteran science correspondent Richard Harris in 2014. A scientist from the drug company Amgen had reviewed the results of 53 studies that were originally thought to be highly promising — findings likely to lead to important new drugs. But when the Amgen scientist tried to replicate those promising results, in most cases he couldn't.

"He tried to reproduce them all," Harris tells Morning Edition host David Greene. "And of those 53, he found he could only reproduce six."

That was "a real eye-opener," says Harris, whose new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions explores the ways even some talented scientists go wrong — pushed by tight funding, competition and other constraints to move too quickly and sloppily to produce useful results.

"A lot of what everybody has reported about medical research in the last few years is actually wrong," Harris says. "It seemed right at the time but has not stood up to the test of time."

The impact of weak biomedical research can be especially devastating, Harris learned, as he talked to doctors and patients. And some prominent scientists he interviewed told him they agree that it's time to recognize the dysfunction in the system and fix it.

The article is here.

Scientists Hack a Human Cell and Reprogram it Like a Computer

Sophia Chen
Wired Magazine
Originally published March 27, 2017

Cells are basically tiny computers: They send and receive inputs and output accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.

Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.
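As a loose illustration of the cut-and-flip logic described above, a DNA strand can be modeled as a list of oriented parts, where a recombinase-triggered inversion toggles which gene a promoter drives. This is a toy model of my own devising, not the actual encoding used in the Nature Biotechnology paper:

```python
def express(dna):
    """Return the first gene downstream of a promoter that is in the
    forward ('+') orientation, or None if nothing is expressed."""
    downstream = False
    for name, orientation in dna:
        if name == "promoter":
            downstream = True
        elif downstream and orientation == "+":
            return name
    return None

def invert(dna, left, right):
    """Recombinase-style inversion: reverse the order and flip the
    orientation of every element between two recognition sites."""
    flipped = [(n, "+" if o == "-" else "-") for n, o in reversed(dna[left:right])]
    return dna[:left] + flipped + dna[right:]
```

For example, a strand `[("promoter", "+"), ("GFP", "-"), ("RFP", "+")]` expresses RFP; after an inversion event flips the GFP segment forward, GFP is expressed instead — one bit of state stored directly in the DNA.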

The article is here.

Sunday, April 23, 2017

Moral injury in U.S. combat veterans: Results from the national health and resilience in veterans study

Blair E. Wisco Ph.D., Brian P. Marx Ph.D., Casey L. May B.S., Brenda Martini M.A., and others
Depression and Anxiety

Abstract

Background

Combat exposure is associated with increased risk of mental disorders and suicidality. Moral injury, or the persistent effects of perpetrating or witnessing acts that violate one's moral code, may contribute to mental health problems following military service. The pervasiveness of potentially morally injurious events (PMIEs) among U.S. combat veterans, and the factors associated with PMIEs in this population, remain unknown.

Methods

Data were analyzed from the National Health and Resilience in Veterans Study (NHRVS), a contemporary and nationally representative survey of a population-based sample of U.S. veterans, including 564 combat veterans, collected September–October 2013. Types of PMIEs (transgressions by self, transgressions by others, and betrayal) were assessed using the Moral Injury Events Scale. Psychiatric and functional outcomes were assessed using established measures.

Results

A total of 10.8% of combat veterans acknowledged transgressions by self, 25.5% endorsed transgressions by others, and 25.5% endorsed betrayal. PMIEs were moderately positively associated with combat severity (β = .23, P < .001) and negatively associated with white race, college education, and higher income (βs = .11–.16, Ps < .05). Transgressions by self were associated with current mental disorders (OR = 1.65, P < .001) and suicidal ideation (OR = 1.67, P < .001); betrayal was associated with postdeployment suicide attempts (OR = 1.99, P < .05), even after conservative adjustment for covariates, including combat severity.

Conclusions

A significant minority of U.S. combat veterans report PMIEs related to their military service. PMIEs are associated with risk for mental disorders and suicidality, even after adjustment for sociodemographic variables, trauma and combat exposure histories, and past psychiatric disorders.

The article is here.

Saturday, April 22, 2017

As Trump Inquiries Flood Ethics Office, Director Looks To House For Action

By Marilyn Geewax and Peter Overby
npr.org
Originally published April 17, 2017

Office of Government Ethics Director Walter Shaub Jr. is calling on the chairman of the House Oversight Committee to become more engaged in overseeing ethics questions in the Trump administration.

In an interview with NPR on Monday, Shaub said public inquiries and complaints involving Trump administration conflicts of interest and ethics have been inundating his tiny agency, which has only advisory power.

"We've even had a couple days where the volume was so huge it filled up the voicemail box, and we couldn't clear the calls as fast as they were coming in," Shaub said. His office is scrambling to keep pace with the workload.

But while citizens, journalists and Democratic lawmakers are pushing for investigations, Shaub suggested a similar level of energy is not coming from the House Oversight Committee, which has the power to investigate ethics questions, particularly those being raised now about reported secret ethics waivers for former lobbyists serving in the Trump administration.

The article is here.

Friday, April 21, 2017

Facebook plans ethics board to monitor its brain-computer interface work

Josh Constine
TechCrunch
Originally posted April 19, 2017

Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Regina Dugan, head of Facebook’s R&D division Building 8, tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”

Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.

The article is here.

Individuals at High Risk for Suicide Are Categorically Distinct From Those at Low Risk.

Tracy K. Witte, Jill M. Holm-Denoma, Kelly L. Zuromski, Jami M. Gauthier, & John Ruscio
Psychological Assessment, Vol 29(4), Apr 2017, 382-393

Abstract

Although suicide risk is often thought of as existing on a graded continuum, its latent structure (i.e., whether it is categorical or dimensional) has not been empirically determined. Knowledge about the latent structure of suicide risk holds implications for suicide risk assessments, targeted suicide interventions, and suicide research. Our objectives were to determine whether suicide risk can best be understood as a categorical (i.e., taxonic) or dimensional entity, and to validate the nature of any obtained taxon. We conducted taxometric analyses of cross-sectional, baseline data from 16 independent studies funded by the Military Suicide Research Consortium. Participants (N = 1,773) primarily consisted of military personnel, and most had a history of suicidal behavior. The Comparison Curve Fit Index values for MAMBAC (.85), MAXEIG (.77), and L-Mode (.62) all strongly supported categorical (i.e., taxonic) structure for suicide risk. Follow-up analyses comparing the taxon and complement groups revealed substantially larger effect sizes for the variables most conceptually similar to suicide risk compared with variables indicating general distress. Pending replication and establishment of the predictive validity of the taxon, our results suggest the need for a fundamental shift in suicide risk assessment, treatment, and research. Specifically, suicide risk assessments could be shortened without sacrificing validity, the most potent suicide interventions could be allocated to individuals in the high-risk group, and research should generally be conducted on individuals in the high-risk group.

The article is here.

Thursday, April 20, 2017

Victims, vectors and villains: are those who opt out of vaccination morally responsible for the deaths of others?

Euzebiusz Jamrozik, Toby Handfield, Michael J Selgelid
Journal of Medical Ethics 2016;42:762-768.

Abstract

Mass vaccination has been a successful public health strategy for many contagious diseases. The immunity of the vaccinated also protects others who cannot be safely or effectively vaccinated—including infants and the immunosuppressed. When vaccination rates fall, diseases like measles can rapidly resurge in a population. Those who cannot be vaccinated for medical reasons are at the highest risk of severe disease and death. They thus may bear the burden of others' freedom to opt out of vaccination. It is often asked whether it is legitimate for states to adopt and enforce mandatory universal vaccination. Yet this neglects a related question: are those who opt out, where it is permitted, morally responsible when others are harmed or die as a result of their decision? In this article, we argue that individuals who opt out of vaccination are morally responsible for resultant harms to others. Using measles as our main example, we demonstrate the ways in which opting out of vaccination can result in a significant risk of harm and death to others, especially infants and the immunosuppressed. We argue that imposing these risks without good justification is blameworthy and examine ways of reaching a coherent understanding of individual moral responsibility for harms in the context of the collective action required for disease transmission. Finally, we consider several objections to this view, provide counterarguments and suggest morally permissible alternatives to mandatory universal vaccination including controlled infection, self-imposed social isolation and financial penalties for refusal to vaccinate.

The article is here.

White House may have violated its own ethics rules with Trump's executive-branch hires

Sonam Sheth
Business Insider
Originally published April 16, 2017

The Trump administration may have stepped on another ethical landmine.

In this case, the White House could have violated its own ethics rules with at least two hires, a New York Times and ProPublica investigation found.

One potential conflict involves Michael Catanzaro, who is the White House's top energy adviser. Until last year, The Times and ProPublica found, Catanzaro was working as a lobbyist for the fossil-fuel industry and had clients like Devon Energy of Oklahoma and Talen Energy of Pennsylvania.

Those two companies were stalwart opponents of President Barack Obama's environmental regulations, like the Clean Power Plan, which sought to promote the use of alternative energy sources. Trump signed an executive order undoing the plan in March. As the White House's top energy adviser, Catanzaro will handle many of those same issues.

The article is here.

Wednesday, April 19, 2017

Should Mental Disorders Be a Basis for Physician-Assisted Death?

Paul S. Appelbaum
Psychiatric Services
Volume 68, Issue 4, April 01, 2017, pp. 315-317

Abstract

Laws permitting physician-assisted death in the United States currently are limited to terminal conditions. Canada is considering whether to extend the practice to encompass intractable suffering caused by mental disorders, and the question inevitably will arise in the United States. Among the problems seen in countries that have legalized assisted death for mental disorders are difficulties in assessing the disorder’s intractability and the patient’s decisional competence, and the disproportionate involvement of patients with social isolation and personality disorders. Legitimate concern exists that assisted death could serve as a substitute for creating adequate systems of mental health treatment and social support.

The article is here.

Should healthcare professionals breach confidentiality when a patient is unfit to drive?

Daniel Sokol
The British Medical Journal
2017;356:j1505

Here are two excerpts:

The General Medical Council (GMC) has guidance on reporting concerns to the Driver and Vehicle Licensing Agency (DVLA). Doctors should explain to patients deemed unfit to drive that their condition may affect their ability to drive and that they—the patients—have a legal obligation to inform the DVLA about their condition.

(cut)

The trouble with this approach is that it relies on patients’ honesty. As far back as Hippocratic times, doctors were instructed to look out for the lies of patients. Two and a half thousand years later the advice still holds true. In a 1994 study on 754 adult patients, Burgoon and colleagues found that 85% admitted to concealing information from their doctors, and over a third said that they had lied outright. Many patients will lie to avoid the loss of their driving licence. They will falsely promise to inform the DVLA and to stop driving. And the chances of the doctor discovering that the patient is continuing to drive are slim.

The article is here.

Tuesday, April 18, 2017

Why Psychiatry Should Discard The Idea of Free Will

Steve Stankevicius
The Skeptical Shrink
Originally posted March 30, 2017

Here is an excerpt:

Neuroscience has continued to pile on the evidence that our thoughts are entirely dependent on the physical processes of the brain, whilst evidence for ‘something else’ is entirely absent. Despite this, mind-body dualism has endured as the predominant view to this day and the belief in free will is playing a crucial role. Free will would only make sense if we invoke at least some magical aspect of the mind. It would only make sense if we relinquish the mind from the bonds of the physical laws of the universe. It would only make sense if we imagine the mind as somewhat irrespective of the brain.

It is not surprising, then, that psychiatry, a medicine of the mind, is not seen as ‘real medicine’. Only 4% of medical graduates in the US apply for psychiatry, and in the UK psychiatry has the fewest applicants per vacancy of any specialty, about one applicant per vacancy (compared with over nine per vacancy in surgery). Psychiatry is seen as the practice of the dark arts, accompanied by mind reading, talking to the dead, and fortune telling. It seems psychiatry deals with metaphysics, yet science is not in the game of metaphysics.

If psychiatry is medicine of the mind, but our common beliefs about the mind are wrong, where does that leave the medicine? In my view, free will is forcing a gap in our picture between physical processes and the mind. This gap forms a trash can where we throw all cases of mental illness we don’t yet understand. Does it seem like a trash can? No, because we feel comfortable in thinking “the mind is mysterious, there’s free will involved”. But if we resign ourselves to accept a mind with free will - a mind that is free - we resign ourselves to a psychiatric specialty that does not attempt to fully understand the underpinnings of mental illness.

The blog post is here.

‘Your animal life is over. Machine life has begun.’

Mark O'Connell
The Guardian
Originally published March 25, 2017

Here is an excerpt:

The relevant science for whole brain emulation is, as you’d expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: first, you scan the pertinent information in a person’s brain – the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a byproduct – through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc). That scan then becomes a blueprint for the reconstruction of the subject brain’s neural networks, which is then converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer or a humanoid machine designed to reproduce and extend the experience of embodiment – something, perhaps, like Natasha Vita-More’s Primo Posthuman.

The whole point of substrate independence, as Koene pointed out to me whenever I asked him what it would be like to exist outside of a human body – and I asked him many times, in various ways – was that it would be like no one thing, because there would be no one substrate, no one medium of being. This was the concept transhumanists referred to as “morphological freedom” – the liberty to take any bodily form technology permits.

“You can be anything you like,” as an article about uploading in Extropy magazine put it in the mid-90s. “You can be big or small; you can be lighter than air and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling.”

The article is here.

Monday, April 17, 2017

The Moral Failure of Crowdfunding Health Care

Jonathan Hiskes
medium.com
Originally posted April 3, 2017

Here is an excerpt:

The most dangerous consequence of the rise of medical crowdfunding, they argue, is the way it trains us to see health care as a personal good to be earned, rather than a universal human right. Other forums, like a public town hall, could provide room for debate on whether we want this state of affairs in our country. The format of GoFundMe steers users toward “hyper-individualized accounts of suffering.”

“Relying on these sites changes how we perceive the problem,” said Kenworthy. “It masks a more open conversation we could be having about the inequities of our health system. There’s no space for a structural critique in your personal appeal.”

In this way, crowdfunding functions as both a symptom and a cause of a health care system designed for austerity.

The article is here.

Who Oversees The President's Ethics?

Alina Selyukh and Lucia Maffei
Maine Public
Originally published March 27, 2017

President Trump continues to own hundreds of businesses around the world, and he has staffed his administration with wealthy people who have ties to a complex web of companies. Those financial entanglements have prompted government ethics experts to raise concerns about conflicts of interest.

They are worried that this president is violating the U.S. Constitution's Emoluments Clause, which bars elected officials from benefiting from foreign governments. Also, in various legal filings and lawsuits, they have raised questions about whether the financial interests of the president and his appointees may be influencing public policy.

As NPR and other media outlets continue to cover these concerns and conflicts of interest, a question frequently arises: Who oversees the ethics of the president and other high-ranking officials? Who has the power to investigate or enforce ethics rules and laws?

The answer can be as entangled as the government bureaucracies involved. Of course, the media, whistleblowers and the courts are key elements of the accountability ecosystem. A number of agencies or government bodies also have a hand in holding presidents and appointees accountable on ethics and conflicts of interest. But a few play an outsize role — though only some of them have direct purview over the activities of the president.

Below is a reference sheet.

The article is here.

Sunday, April 16, 2017

Yuval Harari on why humans won’t dominate Earth in 300 years

Interview by Ezra Klein
Vox.com
Originally posted March 27, 2017

Here are two excerpts:

I totally agree that for success, cooperation is usually more important than just raw intelligence. But the thing is that AI will be far more cooperative, at least potentially, than humans. To take a famous example, everybody is now talking about self-driving cars. The huge advantage of a self-driving car over a human driver is not just that, as an individual vehicle, the self-driving car is likely to be safer, cheaper, and more efficient than a human-driven car. The really big advantage is that self-driving cars can all be connected to one another to form a single network in a way you cannot do with human drivers.

It's the same with many other fields. If you think about medicine, today you have millions of human doctors and very often you have miscommunication between different doctors, but if you switch to AI doctors, you don't really have millions of different doctors. You have a single medical network that monitors the health of everybody in the world.

(cut)

I think the other problem with AI taking over is not the economic problem, but really the problem of meaning — if you don't have a job anymore and, say, the government provides you with universal basic income or something, the big problem is how do you find meaning in life? What do you do all day?

Here, the best answers so far we've got is drugs and computer games. People will regulate more and more their moods with all kinds of biochemicals, and they will engage more and more with three-dimensional virtual realities.

The entire interview is here.

Saturday, April 15, 2017

Devin Nunes and the Ethics Watchdogs

Ryan Lizza
The New Yorker
Originally posted April 11, 2017

Here is an excerpt:

Before taking office, Trump ignored the advice of the federal Office of Government Ethics, which publicly pressed him to operate under the same rules required for his Cabinet members and to fully divest from his business interests. As a result, a key question of the Trump era is whether he might be in violation of the Emoluments Clause of the Constitution, which prohibits officials from receiving gifts from foreign states, when, for instance, foreign diplomats pay for rooms at Trump hotels. In January, CREW filed a lawsuit over the emoluments issue, though several legal scholars have noted that the group may have a tough time making the case that it has standing to sue Trump.

Eisen, who is unfailingly optimistic, disagrees. “I think that the public awareness of this unethical governing environment has something to do with Trump’s mid-thirties approval ratings,” he said. “I think that our emoluments case is going to be the most impactful of all.”

The article is here.

GAO to Review Ethics, Funding of Trump Transition

Eric Katz
Government Executive
Originally published April 11, 2017

The federal government’s internal auditor is commencing a review of President Trump’s transition into office, examining potential conflicts of interest, contacts with foreign governments and funding sources.

The Government Accountability Office is launching the review after Sen. Elizabeth Warren, D-Mass., and Rep. Elijah Cummings, D-Md., requested it. GAO will determine how the General Services Administration managed the transition and its funding mechanisms, what Trump’s transition spent those funds on and how much private funding it collected. The auditors will also probe the Office of Government Ethics to evaluate what information it made available to the transition team and to what extent Trump’s associates took advantage of OGE’s offerings versus the previous two transitions.

The article is here.

Friday, April 14, 2017

The moral bioenhancement of psychopaths

Elvio Baccarini and Luca Malatesti
The Journal of Medical Ethics
http://dx.doi.org/10.1136/medethics-2016-103537

Abstract

We argue that the mandatory moral bioenhancement of psychopaths is justified as a prescription of social morality. Moral bioenhancement is legitimate when it is justified on the basis of the reasons of the recipients. Psychopaths expect and prefer that the agents with whom they interact do not have certain psychopathic traits. Particularly, they have reasons to require the moral bioenhancement of psychopaths with whom they must cooperate. By adopting a public reason and a Kantian argument, we conclude that we can justify to a psychopath being the recipient of mandatory moral bioenhancement because he has a reason to require the application of this prescription to other psychopaths.

Ethical Guidelines on Lab-Grown Embryos Beg for Revamping

Karen Weintraub
Scientific American
Originally posted on March 21, 2017

For nearly 40 years scientists have observed their self-imposed ban on doing research on human embryos in the lab beyond the first two weeks after fertilization. Their initial reasoning was somewhat arbitrary: 14 days is when a band of cells known as a primitive streak, which will ultimately give rise to adult tissues, forms in an embryo. It is also roughly the last time a human embryo can divide and create more than one person, and a few days before the nervous system begins to develop. But the so-called 14-day rule has held up all this time partly because scientists could not get an embryo to grow that long outside its mother's body.

Researchers in the U.K. and U.S. recently succeeded for the first time in growing embryos in the lab for nearly two weeks before terminating them, showing that the so-called 14-day rule is no longer a scientific limitation—although it remains a cultural one. Now, a group of Harvard University scientists has published a paper arguing that it is time to reconsider the 14-day rule because of advances in synthetic biology.

The U.S. has no law against growing embryos beyond two weeks—as long as the research is not funded with federal dollars. But most scientific journals will not publish studies that violate the 14-day rule, and the International Society for Stem Cell Research requires its members to agree to the rule in order to qualify for membership.

The article is here.

Thursday, April 13, 2017

Humans selectively edit reality before accepting it

Olivia Goldhill
Quartz
Originally published March 26, 2017

Knowledge is power, so the saying goes, which makes it all the more striking how determined humans are to avoid useful information. Research in psychology, economics, and sociology has, over the course of several decades, highlighted countless examples of cases where humans are apt to ignore information. A review of these earlier studies by Carnegie Mellon University researchers, published this month in the Journal of Economic Literature, shows the extent to which humans avoid information and so selectively edit their own reality.

Rather than highlighting all the myriad ways humans fail to proactively seek out useful information, the paper’s authors focus on active information avoidance: cases where individuals know information is available and have free access to it, yet choose not to consider it. Examples of this phenomenon, revealed by the previous studies, include investors not looking at their financial portfolios when the stock market is down; patients taking STD tests and then failing to obtain the results; professionals refusing to look at their colleagues’ feedback on their work; and even the propensity of wealthy people to avoid poor neighborhoods so they do not become aware of, and feel guilt over, their own privilege.

The article is here.

Identity change and informed consent

Karsten Witt
Journal of Medical Ethics
Published Online First: 20 March 2017.
doi: 10.1136/medethics-2016-103684

Abstract

In this paper, I focus on a kind of medical intervention that is at the same time fascinating and disturbing: identity-changing interventions. My guiding question is how such interventions can be ethically justified within the bounds of contemporary bioethical mainstream that places great weight on the patient's informed consent. The answer that is standardly given today is that patients should be informed about the identity effects, thus suggesting that changes in identity can be treated like ‘normal’ side effects. In the paper, I argue that this approach is seriously lacking because it misses important complexities going along with decisions involving identity changes and consequently runs into mistakes. As a remedy I propose a new approach, the ‘perspective-sensitive account’, which avoids these mistakes and thus provides the conceptual resources to systematically reflect on and give a valid consent to identity-changing interventions.

The article is here.

Editor's note: While this article deals with medical interventions, such as Deep Brain Stimulation, the similar concerns might be generalized to psychotherapy and/or psychopharmacology.

Wednesday, April 12, 2017

National Corruption Breeds Personal Dishonesty

Simon Makin
Scientific American
Originally published on March 1, 2017

Here is an excerpt:

A number of studies have shown that seeing a peer behave unethically increases people's dishonesty in laboratory tests. What is much harder to investigate is how this kind of influence operates at a societal level. But that is exactly what behavioral economists Simon Gächter of the University of Nottingham in England and Jonathan Schulz of Yale University set out to do in a study published in March 2016 in Nature. Their findings suggest that corruption not only harms a nation's prosperity but also shapes the moral behavior of its citizens. The results have implications for interventions aimed at tackling corruption.

The researchers developed a measure of corruption by combining three widely used metrics that capture levels of political fraud, tax evasion and corruption in a given country. “We wanted to get a really broad index, including many different aspects of rule violations,” Schulz says. They then conducted an experiment involving 2,568 participants from 23 nations. Participants were asked to roll a die twice and report the outcome of only the first roll. They received a sum of money proportional to the number reported but got nothing for rolling a six. Nobody else saw the die, so participants were free to lie about the outcome.
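The payoff structure of that die-rolling task can be simulated. A minimal sketch, using only the design details quoted above; the "report the better of two rolls" strategy is one benchmark for so-called justified dishonesty, and everything else in the code is illustrative. The point of the paradigm is that no individual can be caught lying, yet lying still shows up in the aggregate as an average payoff above chance:

```python
import random

def payoff(roll: int) -> int:
    """Payoff rule from the study: 1-5 pay their face value, a six pays nothing."""
    return 0 if roll == 6 else roll

def report(honest: bool) -> int:
    """One participant: roll twice, report (supposedly) only the first roll.

    An honest subject reports the first roll; a 'justified' liar reports
    whichever of the two rolls pays better.
    """
    first, second = random.randint(1, 6), random.randint(1, 6)
    if honest:
        return first
    return first if payoff(first) >= payoff(second) else second

def mean_payoff(honest: bool, n: int = 100_000) -> float:
    """Average payoff across a simulated population of n participants."""
    return sum(payoff(report(honest)) for _ in range(n)) / n

random.seed(0)
print(f"fully honest sample:      {mean_payoff(True):.2f}")   # ≈ 2.50
print(f"'justified' lying sample: {mean_payoff(False):.2f}")  # ≈ 3.47
```

A fully honest population averages 2.5; any excess over that, at the country level, is the behavioral trace of dishonesty that Gächter and Schulz correlate with their corruption index.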

The article is here.

Why People Continue to Believe Objectively False Things

Amanda Taub and Brendan Nyhan
New York Times - The Upshot
Originally posted March 22, 2017

Here is an excerpt:

Even when myths are dispelled, their effects linger. The Boston College political scientist Emily Thorson conducted a series of studies showing that exposure to a news article containing a damaging allegation about a fictional political candidate caused people to rate the candidate more negatively even when the allegation was corrected and people believed it to be false.

There are ways to correct information more effectively. Adam Berinsky of M.I.T., for instance, found that a surprising co-partisan source (a Republican member of Congress) was the most effective in reducing belief in the “death panel” myth about the Affordable Care Act.

But in the wiretapping case, Republican lawmakers have neither supported Mr. Trump’s wiretap claims (which could risk their credibility) nor strenuously opposed them (which could prompt a partisan backlash). Instead, they have tried to shift attention to a different political narrative — one that suits the partisan divide by making Mr. Obama the villain of the piece. Rather than focusing on the wiretap allegation, they have sought to portray the House Intelligence Committee hearings on Russian interference in the election as an investigation into leaks of classified information.

The article is here.

Tuesday, April 11, 2017

Welcker v. Georgia Board of Examiners of Psychologists

Legal Decision

Synopsis: The Georgia State Board of Psychology may deny a license to an applicant whose doctoral program does not meet the residency requirement, where the applicant has not shown that strict application of the rule causes a substantial hardship.

Here are two excerpts:

Neither the Board's decision to deny Welcker a license nor their denial of her petition for waiver can be considered a contested case. Georgia law allows the denial of a license without a hearing where an applicant fails to show that she has met all the qualifications for that license. OCGA § 43-1-19 (a). Therefore, because no hearing was required by law before the denial of Welcker's license, the Board's denial of Welcker's license application does not present a contested case subject to judicial review.

The Board's decision to deny a petition for waiver also cannot be considered a contested case. OCGA § 43-1-19 (j) explicitly states that the "refusal to issue a previously denied license" shall not be considered a contested case under the Administrative Procedure Act and "notice and hearing with the meaning of the [Act] shall not be required"; however, the applicant "shall be allowed to appear before the board if he or she so requests." Nevertheless, such rulings are expressly made subject to judicial review under OCGA § 50-13-9.1 (f), which provides that "[t]he agency's decision to deny a petition for variance or waiver shall be subject to judicial review in accordance with Code Section 50-13-19."

(cut)

The Board denied Welcker's petition for waiver on two grounds: (1) her failure to meet the appropriate residency requirements "as per the Board rules in effect in 2007" and (2) her failure to prove a substantial hardship resulting from strict application of the rule.

The ruling is here.

The Associations between Ethical Organizational Culture, Burnout, and Engagement: A Multilevel Study

Mari Huhtala, Asko Tolvanen, Saija Mauno, and Taru Feldt
J Bus Psychol
DOI 10.1007/s10869-014-9369-2

Abstract/Purpose

Ethical culture is a specific form of organizational culture (including values and systems that can promote ethical behavior), and as such a socially constructed phenomenon. However, no previous studies have investigated the degree to which employees’ perceptions of their organization’s ethical culture are shared within work units (departments), which was the first aim of this study. In addition, we studied the associations between ethical culture and occupational well-being (i.e., burnout and work engagement) at both the individual and work-unit levels.

Design/Methodology/Approach

The questionnaire data were gathered from 2,146 respondents with various occupations in 245 different work units in one public sector organization. Ethical organizational culture was measured with the corporate ethical virtues scale, including eight sub-dimensions.

Findings

Multilevel structural equation modeling showed that 12–27% of the total variance in the dimensions of ethical culture was explained by departmental homogeneity (shared experiences). At both the within and between levels, higher perceptions of ethical culture were associated with lower burnout and higher work engagement.
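The 12–27% figure is, in multilevel terms, an intraclass correlation: the share of total variance in culture perceptions that sits between work units rather than between individuals. A hedged sketch of how such a figure is estimated, using the standard one-way ANOVA formula for ICC(1) on synthetic data (the unit counts echo the study's 245 work units, but the data and effect size are made up for illustration):

```python
import random
import statistics

def icc1(groups):
    """ICC(1) from a one-way ANOVA: the share of total variance in a measure
    attributable to group membership (balanced groups assumed for simplicity)."""
    k = len(groups[0])                        # members per work unit
    grand = statistics.mean(x for g in groups for x in g)
    means = [statistics.mean(g) for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (len(groups) - 1)
    msw = sum((x - statistics.mean(g)) ** 2
              for g in groups for x in g) / (len(groups) * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Synthetic work units: a shared unit-level 'culture' component plus
# individual noise, built so the true shared share is 0.25/1.25 = 0.20.
random.seed(0)
units = []
for _ in range(245):                          # 245 units, as in the study
    unit_level = random.gauss(0, 0.5)
    units.append([unit_level + random.gauss(0, 1.0) for _ in range(8)])
print(f"estimated shared variance: {icc1(units):.0%}")
```

An estimate in the 10–30% range on data built this way illustrates what "explained by departmental homogeneity" means: co-workers in the same unit really do perceive their culture more similarly than strangers do.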

Implications

The results suggest that organizations should support ethical practices at the work-unit level, to enhance work engagement, and should also pay special attention to work units with a low ethical culture because these work environments can expose employees to burnout.

Originality/Value

This is one of the first studies to find evidence of an association between shared experiences of ethical culture and collective feelings of both burnout and work engagement.

A copy of the article is here.

Monday, April 10, 2017

A Scholarly Sting Operation Shines a Light on ‘Predatory’ Journals

Gina Kolata
The New York Times
Originally posted March 22, 2017

Here is an excerpt:

Yet, when Dr. Fraud applied to 360 randomly selected open-access academic journals asking to be an editor, 48 accepted her and four made her editor in chief. She got two offers to start a new journal and be its editor. One journal sent her an email saying, “It’s our pleasure to add your name as our editor in chief for the journal with no responsibilities.”

Little did they know that they had fallen for a sting, plotted and carried out by a group of researchers who wanted to draw attention to and systematically document the seamy side of open-access publishing. While those types of journals began with earnest aspirations to make scientific papers available to everyone, their proliferation has had unintended consequences.

Traditional journals typically are supported by subscribers who pay a fee while authors pay nothing to be published. Nonsubscribers can only read papers if they pay the journal for each one they want to see.

Open-access journals reverse that model. The authors pay and the published papers are free to anyone who cares to read them.

Publishing in an open-access journal can be expensive — the highly regarded Public Library of Science (PLOS) journals charge from $1,495 to $2,900 to publish a paper, with the fee dependent on which of its journals accepts the paper.

Not everyone anticipated what would happen next, or to what extent it would happen.

The article is here.

Citigroup Has an On-call Ethicist to Help It Solve Moral Issues

Alana Abramson
Fortune Magazine
Originally posted March 17, 2017

It turns out that Citigroup has an on-call ethicist to handle issues around the intersection of banking, finance, and morality.

The bank has worked with Princeton University Professor David Miller for the past three years, according to the Wall Street Journal. His role includes providing advice to top executives and reviewing topics and projects they have concerns about.

Miller was brought on, according to the Journal, by Citigroup CEO Michael Corbat, who felt the role was necessary after learning about employees' hesitations to voice concerns about wrongdoings, and public perceptions of banks.

The article is here.

Sunday, April 9, 2017

Are You Creeped Out by the Idea of a “Moral Enhancement” Pill?

Vanessa Rampton
Slate.com
Originally posted March 20, 2017

Here is an excerpt:

In its broad outlines, the idea of moral bioenhancement is as follows: Once we understand the biological and genetic influences on moral decision-making and judgments, we can enhance (read: improve) them with drugs, surgery, or other devices. A “morality pill” could shore up self-control, empathy, benevolence, and other desirable characteristics while discouraging tendencies toward violent aggression or racism. As a result, people might be kinder to their families, better members of their communities, and better able to address some of the world’s biggest problems such as global inequality, environmental destruction, and war.

In fact, the attempts of parents, educators, friends, philosophers, and therapists to make people behave better are already getting a boost from biology and technology. Recent studies have shown that neurological and genetic characteristics influence moral decision-making in more or less subtle ways. Some behaviors, like violent aggression, drug abuse and addiction, and the likelihood of committing a crime have been linked to genetic variables as well as specific brain chemicals such as dopamine. Conversely, evidence suggests that our ability to be empathetic, our tolerance of other racial groups, and our sensitivity to fairness all have their roots in biology. As cutting-edge developments in neuroscience and genetics are touted as able to crack the morality code, the search for a morality pill will only continue apace.

The article is here.

Saturday, April 8, 2017

Evangelicals Are Aiding and Abetting the Deconstruction of Morality

Marvin Thompson
christianpost.com
Originally posted April 3, 2017

The LA Times editorial on April 2, 2017, described Trump and his tactics during the presidential primaries and election as “…a narcissist and a demagogue who used fear and dishonesty to appeal to the worst in American voters.” This is not merely the judgment of a disappointed liberal media, unable to come to terms with a devastating election loss, now inveighing against the President with unwarranted charges and innuendos. If the Church, and especially evangelical churches, take that view and ignore the many conservative voices, including many within the evangelical community and other Christians not self-identified as evangelicals, then our problems run much deeper than we think.

No, the LA Times’ conclusion is a commentary on the degradation of evangelical morality. Notice, the worst in American voters. Now, pause and let that sink in.

Do you get it? Do you see the gravity of the situation for a community that professes to stand on the infallible truth of the Gospel and on immutable biblical principles? No? Then consider that it is the evangelical vote that carried Trump through the primaries and over the top in the election. Do you see it now?

Not yet? Then consider, further, what we know about Trump, about his lack of a moral compass and his unabashed embrace of it; his disrespect of others, be they male or female or disabled; his willful mendacity; his contempt for God, despite what Paula White, Dobson, et al claim about his so-called conversion (where is the evidence, as evangelicals like to ask?); his catalyzing effect on the worst racist elements of society; his promotion of hatred and violence; his utter lack of empathy for the poor and less fortunate. Nothing has changed since his election as President. Except, he now has the power to propagate his warped morality. This power was given to him by the evangelicals. Does that make it any clearer?

The blog post is here.

Friday, April 7, 2017

Informed Patient? Don’t bet on it

Mikkael Sekeres and Timothy Gilligan
The New York Times
Originally posted March 1, 2017

Here is an excerpt:

The secret is that informed consent in health care is commonly not-so-well informed. It might be a document we ask you to sign, at the behest of our lawyers, in case we end up in court if a bad outcome happens. Unfortunately, it’s often not really about informing you. In schools, teachers determine what students know through tests and homework. The standard is not whether the teacher has explained how to add, but instead whether the student can add. If we were truly invested in whether you were informed, we’d give you a quiz, or at least ask you to repeat back to us what you heard so we could assess its accuracy.

The article is here.

Against Willpower

Carl Erik Fisher
Nautilus
Originally published February 2, 2017

Here is an excerpt:

These hidden dimensions of willpower call into question the whole scholarly conception of the term, and put us into a lose-lose situation. Either our definition of willpower is narrowed and simplified to the point of uselessness (in both research and casual contexts), or it is allowed to continue as an imprecise term, standing in for an inconsistent hodgepodge of various mental functions. Willpower may simply be a pre-scientific idea—one that was born from social attitudes and philosophical speculation rather than research, and enshrined before rigorous experimental evaluation of it became possible. The term has persisted into modern psychology because it has a strong intuitive hold on our imagination: Seeing willpower as a muscle-like force does seem to match up with some limited examples, such as resisting cravings, and the analogy is reinforced by social expectations stretching back to Victorian moralizing. But these ideas also have a pernicious effect, distracting us from more accurate ways of understanding human psychology and even detracting from our efforts toward meaningful self-control. The best way forward may be to let go of “willpower” altogether.

Doing so would rid us of some considerable moral baggage. Notions of willpower are easily stigmatizing: It becomes OK to dismantle social safety nets if poverty is a problem of financial discipline, or if health is one of personal discipline. An extreme example is the punitive approach of our endless drug war, which dismisses substance use problems as primarily the result of individual choices. Unhealthy moralizing creeps into the most quotidian corners of society, too. When the United States started to get concerned about litter in the 1950s, the American Can Company and other corporations financed a “Keep America Beautiful” campaign to divert attention from the fact that they were manufacturing enormous quantities of cheap, disposable, and profitable packaging, putting the blame instead on individuals for being litterbugs. Willpower-based moral accusations are among the easiest to sling.

The article is here.

Thursday, April 6, 2017

Would You Deliver an Electric Shock in 2015?

Dariusz Doliński, Tomasz Grzyb, and others
Social Psychological and Personality Science
First Published January 1, 2017

Abstract

In spite of the over 50 years which have passed since the original experiments conducted by Stanley Milgram on obedience, these experiments are still considered a turning point in our thinking about the role of the situation in human behavior. While ethical considerations prevent a full replication of the experiments from being prepared, a certain picture of the level of obedience of participants can be drawn using the procedure proposed by Burger. In our experiment, we have expanded it by controlling for the sex of participants and of the learner. The results achieved show a level of participants’ obedience toward instructions similarly high to that of the original Milgram studies. Results regarding the influence of the sex of participants and of the “learner,” as well as of personality characteristics, do not allow us to unequivocally accept or reject the hypotheses offered.

The article is here.

“After 50 years, it appears nothing has changed,” said social psychologist Tomasz Grzyb, an author of the new study, which appeared this week in the journal Social Psychological and Personality Science.

A Los Angeles Times article summarizes the study here.

How to Upgrade Judges with Machine Learning

by Tom Simonite
MIT Technology Review
Originally posted March 6, 2017

Here is an excerpt:

The algorithm assigns defendants a risk score based on data pulled from records for their current case and their rap sheet, for example the offense they are suspected of, when and where they were arrested, and numbers and type of prior convictions. (The only demographic data it uses is age—not race.)

Kleinberg suggests that algorithms could be deployed to help judges without major disruption to the way they currently work in the form of a warning system that flags decisions highly likely to be wrong. Analysis of judges’ performance suggested they have a tendency to occasionally release people who are very likely to fail to show in court, or to commit crime while awaiting trial. An algorithm could catch many of those cases, says Kleinberg.
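To make the "warning system" idea concrete, here is a hedged sketch. The feature names, weights, and threshold are invented for illustration; the study's actual model is trained on large volumes of historical case records. What the sketch preserves is the design choice Kleinberg describes: the algorithm flags high-risk release decisions rather than replacing the judge.

```python
import math
from dataclasses import dataclass

@dataclass
class Defendant:
    age: int
    prior_convictions: int
    prior_failures_to_appear: int

def risk_score(d: Defendant) -> float:
    """Toy logistic risk score built from case-record features.

    The weights are made up for illustration; a real model would be
    fit to historical outcomes, not hand-tuned."""
    z = (-2.0
         + 0.25 * d.prior_convictions
         + 0.60 * d.prior_failures_to_appear
         - 0.02 * max(d.age - 18, 0))
    return 1 / (1 + math.exp(-z))   # predicted probability of pretrial failure

def flag_release(d: Defendant, threshold: float = 0.6) -> bool:
    """Warning-system mode: flag a release decision as likely wrong when
    predicted risk is high, leaving the final call with the judge."""
    return risk_score(d) >= threshold

low_risk = Defendant(age=35, prior_convictions=0, prior_failures_to_appear=0)
high_risk = Defendant(age=22, prior_convictions=8, prior_failures_to_appear=4)
print(flag_release(low_risk), flag_release(high_risk))  # False True
```

The threshold is where policy enters: set it high and the system flags only the most extreme cases, set it low and it second-guesses judges constantly.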

Richard Berk, a professor of criminology at the University of Pennsylvania, describes the study as “very good work,” and an example of a recent acceleration of interest in applying machine learning to improve criminal justice decisions. The idea has been explored for 20 years, but machine learning has become more powerful, and data to train it more available.

Berk recently tested a system with the Pennsylvania State Parole Board that advises on the risk a person will reoffend, and found evidence it reduced crime. The NBER study is important because it looks at how machine learning can be used pre-sentencing, an area that hasn’t been thoroughly explored, he says.

The article is here.

Editor's Note: I often wonder how long it will be before machine learning is applied to psychotherapy.

Wednesday, April 5, 2017

Canada passes genetic ‘anti-discrimination’ law

Xavier Symons
BioEdge
Originally published 10 March 2017

Canada’s House of Commons has passed a controversial new law that prevents corporations from demanding genetic information from potential employees or customers.

The law, known as ‘Bill S-201’, makes it illegal for companies to deny someone a job if they refuse a genetic test, and also prevents insurance companies from making new customer policies conditional on the supply of genetic information. Insurance companies will no longer be able to solicit genetic tests so as to determine customer premiums.

Critics of the bill said that insurance premiums would skyrocket, in some cases up to 30 or 50 per cent, if companies are prevented from obtaining genetic data. And Prime Minister Justin Trudeau labelled the proposed legislation “unconstitutional” as it impinges on what he believes should be a matter for individual provinces to regulate.

The article is here.

Root Out Bias from Your Decision-Making Process

Thomas C. Redman
Harvard Business Review
Originally posted March 10, 2017

Here is an excerpt:

Making good decisions involves hard work. Important decisions are made in the face of great uncertainty, and often under time pressure. The world is a complex place: People and organizations respond to any decision, working together or against one another, in ways that defy comprehension. There are too many factors to consider. There is rarely an abundance of relevant, trusted data that bears directly on the matter at hand. Quite the contrary — there are plenty of partially relevant facts from disparate sources — some of which can be trusted, some not — pointing in different directions.

With this backdrop, it is easy to see how one can fall into the trap of making the decision first and then finding the data to back it up later. It is so much faster. But faster is not the same as well-thought-out. Before you jump to a decision, you should ask yourself, “Should someone else who has time to assemble a complete picture make this decision?” If so, you should assign the decision to that person or team.

The article is here.

Tuesday, April 4, 2017

Two licensing boards, for psychologists and counselors, at impasse with governor over sexual orientation language

Nancy Hicks
Lincoln Journal Star  
Originally posted March 11, 2017

Two state licensing boards that oversee psychologists and mental health counselors have been at odds with two Nebraska governors and the Nebraska Catholic Conference for almost a decade over sexual orientation and gender identity language in their rules.

The two licensing boards -- the Board of Psychology and the Board of Mental Health Practice -- have been unable to update their rules because they have refused to compromise on these issues.

And it looks like that impasse will continue, after the administration of Gov. Pete Ricketts recently rejected both sets of rules and provided its own draft of acceptable language.

That proposed language -- which strips out antidiscrimination protection based on sexual orientation and gender identity -- “is completely unacceptable and egregious,” said Dr. Anne Talbot, president of the Nebraska Psychological Association, which represents psychologists across the state.

Her group will oppose the administration's proposed changes when the issue is before the state licensing board May 31.

The article is here.

Illusions in Reasoning

Sangeet S. Khemlani & P. N. Johnson-Laird
Minds & Machines
DOI 10.1007/s11023-017-9421-x

Abstract

Some philosophers argue that the principles of human reasoning are impeccable and that mistakes are no more than momentary lapses in "information processing." This article makes a case to the contrary. It shows that human reasoners commit systematic fallacies. The theory of mental models predicts these errors. It postulates that individuals construct mental models of the possibilities to which the premises of an inference refer. But their models usually represent what is true in a possibility, not what is false. This procedure reduces the load on working memory, and for the most part it yields valid inferences. However, as a computer program implementing the theory revealed, it leads to fallacious conclusions for certain inferences: those for which it is crucial to represent what is false in a possibility. Experiments demonstrate the variety of these fallacies and contrast them with control problems, which reasoners tend to get right. The fallacies can be compelling illusions, and they occur in reasoning based on sentential connectives such as "if" and "or," quantifiers such as "all the artists" and "some of the artists," deontic relations such as "permitted" and "obligated," and causal relations such as "causes" and "allows." After we have reviewed the principal results, we consider the potential for alternative accounts to explain these illusory inferences, and we show how the illusions illuminate the nature of human rationality.
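One family of these illusions can be checked mechanically with a truth table. A minimal sketch; the premise wording is adapted from the mental-models literature, and "or else" is given an exclusive reading, which is one standard interpretation:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: 'if p then q'."""
    return (not p) or q

def premise(king: bool, ace: bool) -> bool:
    """'If there is a king then there is an ace, or else if there is not
    a king then there is an ace' -- with 'or else' read as an exclusive
    disjunction of the two conditionals."""
    return implies(king, ace) != implies(not king, ace)

# Most reasoners confidently conclude 'there is an ace'. Enumerating every
# possibility shows the premise is satisfied only when there is NO ace.
models = [(king, ace) for king, ace in product([True, False], repeat=2)
          if premise(king, ace)]
print(models)                         # [(True, False), (False, False)]
print(all(ace for _, ace in models))  # False: 'ace' is not entailed
```

The illusion arises, on the mental-models account, because people represent only what is true in each conditional and so never notice the possibilities in which the premise holds without an ace.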

Find it here.

Monday, April 3, 2017

Conviction, persuasion and manipulation: the ethical dimension of epistemic vigilance

Johannes Mahr
Cognition and Culture Institute Blog
Originally posted 10 March 2017

In today’s political climate, moral outrage about (alleged) propaganda and manipulation of public opinion dominates our discourse. Charges of manipulative information provision have arguably become the most widely used tool to discredit one’s political opponent. Of course, one reason why such charges have become so prominent is that the way we consume information through online media has made us more vulnerable than ever to such manipulation. Take a recent story published by The Guardian, which describes the strategy of information dissemination allegedly used by the British ‘Leave Campaign’:
“The strategy involved harvesting data from people’s Facebook and other social media profiles and then using machine learning to ‘spread’ through their networks. Wigmore admitted the technology and the level of information it gathered from people was ‘creepy’. He said the campaign used this information, combined with artificial intelligence, to decide who to target with highly individualised advertisements and had built a database of more than a million people.”
This might not just strike you as “creepy” but as simply unethical just as it did one commentator cited in the article who called these tactics “extremely disturbing and quite sinister”. Here, I want to investigate where this intuition comes from.

The blog post is here.

Can Human Evolution Be Controlled?

William B. Hurlbut
Big Questions Online
Originally published February 17, 2017

Here is an excerpt:

These gene-editing techniques may transform our world as profoundly as many of the greatest scientific discoveries and technological innovations of the past — like electricity, synthetic chemistry, and nuclear physics. CRISPR/Cas9 could provide urgent and uncontroversial progress in biomedical science, agriculture, and environmental ecology. Indeed, the power and depth of operation of these new tools is delivering previously unimagined possibilities for reworking or redeploying natural biological processes — some with startling and disquieting implications. Proposals by serious and well-respected scientists include projects of broad ecological engineering, de-extinction of human ancestral species, a biotechnological “cure” for aging, and guided evolution of the human future.

The questions raised by such projects go beyond issues of individual rights and social responsibilities to considerations of the very source and significance of the natural world, its integrated and interdependent processes, and the way these provide the foundational frame for the physical, psychological, and spiritual meaning of human life.

The article is here.

Sunday, April 2, 2017

Presidential aide’s tweets violate law, ethics lawyers say

The Associated Press
Originally posted April 1, 2017

A top adviser to President Trump on Saturday urged the defeat of a Michigan congressman and member of a conservative group of U.S. House lawmakers who derailed the White House on legislation to repeal and replace the Obama-era health care law.

But the tweet by White House social media director Dan Scavino Jr. violated federal law that limits political activity by government employees, government ethics lawyers said.

The White House had no immediate comment.

The article is here.

The Problem of Evil: Crash Course Philosophy #13

Published on May 9, 2016

After weeks of exploring the existence and nature of god, today Hank explores one of the biggest problems in theism, and possibly the biggest philosophical question humanity faces: why is there evil?


Saturday, April 1, 2017

Does everyone have a price? On the role of payoff magnitude for ethical decision making

Benjamin E. Hilbig and Isabel Thielmann
Cognition
Volume 163, June 2017, Pages 15–25

Abstract

Most approaches to dishonest behavior emphasize the importance of corresponding payoffs, typically implying that dishonesty might increase with increasing incentives. Prior evidence, however, does not appear to confirm this intuition, though extant findings are based on relatively small payoffs, the potential effects of which are solely analyzed across participants. In two experiments, we used different multi-trial die-rolling paradigms designed to investigate dishonesty at the individual level (i.e., within participants) and as a function of the payoffs at stake – implementing substantial incentives exceeding 100€. Results show that incentive sizes indeed matter for ethical decision making, though primarily for two subsets of “corruptible individuals” (who cheat more the more they are offered) and “small sinners” (who tend to cheat less as the potential payoffs increase). Others (“brazen liars”) are willing to cheat for practically any non-zero incentive whereas still others (“honest individuals”) do not cheat at all, even for large payoffs. By implication, the influence of payoff magnitude on ethical decision making is often obscured when analyzed across participants and with insufficiently tempting payoffs.
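As a side note on how die-rolling paradigms can reveal cheating in the aggregate: if honest reports of a payoff-maximizing outcome should occur at chance (1/6), the excess of reports above chance gives a rough estimate of the misreporting rate. The sketch below is a generic illustration of that logic, not the authors' analysis code; the function name and the example counts are invented.

```python
def estimated_cheating_rate(reported_wins: int, trials: int) -> float:
    """Estimate the fraction of misreported trials in a die-roll task.

    Under honesty, the payoff-maximizing outcome occurs with probability
    1/6. If a fraction c of the remaining trials is misreported as a win,
    the observed proportion p satisfies p = 1/6 + c * (5/6), hence
    c = (p - 1/6) / (5/6). Clamped at zero for below-chance reports.
    """
    p = reported_wins / trials
    return max(0.0, (p - 1 / 6) / (5 / 6))
```

For instance, if 50 wins are reported across 120 trials (where chance predicts 20), the estimated misreporting rate is 0.3. Note this only works across many trials; no single report can be proven dishonest, which is precisely why the paradigm preserves participants' anonymity.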

The article is here.

Bannon May Have Violated Ethics Pledge by Communicating With Breitbart

Lachlan Markay
Daily Beast
Originally published March 30, 2017

Here is an excerpt:

Bannon, Breitbart’s former chairman, has spoken directly to two of the company’s top editors since joining the White House. Trump’s predecessor publicly waived portions of the ethics pledge for similar communications, but the White House confirmed this week that it has not done so for Bannon.

“It seems to me to be a very clear violation,” Richard Painter, who was White House counsel for President George W. Bush, told The Daily Beast in an interview.

A White House spokesperson confirmed that every Trump appointee has signed the ethics pledge required by an executive order imposed by the president in January. No White House employees have received waivers to the pledge, the spokesperson added.

All incoming appointees are required to certify that they “will not for a period of 2 years from the date of my appointment participate in any particular matter involving specific parties that is directly and substantially related to my former employer or former clients.”

The article is here.

Friday, March 31, 2017

Dishonesty gets easier on the brain the more you do it

Neil Garrett
Aeon
Originally published March 7, 2017

Here are two excerpts:

These two ideas – the role of arousal on our willingness to cheat, and neural adaptation – are connected because the brain does not just adapt to things such as sounds and smells. The brain also adapts to emotions. For example, when presented with aversive pictures (eg, threatening faces) or receiving something unpleasant (eg, an electric shock), the brain will initially generate strong responses in regions associated with emotional processing. But when these experiences are repeated over time, the emotional responses diminish.

(cut)

There have also been a number of behavioural interventions proposed to curb unethical behaviour. These include using cues that emphasise morality and encouraging self-engagement. We don’t currently know the underlying neural mechanisms that can account for the positive behavioural changes these interventions drive. But an intriguing possibility is that they operate in part by shifting up our emotional reaction to situations in which dishonesty is an option, in turn helping us to resist the temptation to which we have become less resistant over time.

The article is here.

Signaling Emotion and Reason in Cooperation

Emma Edelman Levine, Alixandra Barasch, David G. Rand, Jonathan Z. Berman, and Deborah A. Small (February 23, 2017).

Abstract

We explore the signal value of emotion and reason in human cooperation. Across four experiments utilizing dyadic prisoner's dilemma games, we establish three central results. First, individuals believe that a reliance on emotion signals that one will cooperate more so than a reliance on reason. Second, these beliefs are generally accurate — those who act based on emotion are more likely to cooperate than those who act based on reason. Third, individuals’ behavioral responses towards signals of emotion and reason depend on their own decision mode: those who rely on emotion tend to conditionally cooperate (that is, cooperate only when they believe that their partner has cooperated), whereas those who rely on reason tend to defect regardless of their partner’s signal. These findings shed light on how different decision processes, and lay theories about decision processes, facilitate and impede cooperation.

Available at SSRN: https://ssrn.com/abstract=2922765

Editor's note: This research has implications for developing the therapeutic relationship.

Thursday, March 30, 2017

Risk considerations for suicidal physicians

Doug Brunk
Clinical Psychiatry News
Publish date: February 27, 2017

Here are two excerpts:

According to the American Foundation for Suicide Prevention, 300-400 physicians take their own lives every year, the equivalent of two to three medical school classes. “That’s a doctor a day we lose to suicide,” said Dr. Myers, a professor of clinical psychiatry at State University of New York, Brooklyn, who specializes in physician health. Compared with the general population, the suicide rate ratio is 2.27 among female physicians and 1.41 among male physicians (Am J Psychiatry. 2004;161[12]:2295-2302), and an estimated 85%-90% of those who carry out a suicide have a psychiatric illness such as major depressive disorder, bipolar disorder, alcohol use and substance use disorder, and borderline personality disorder. Other triggers common to physicians, Dr. Myers said, include other kinds of personality disorders, burnout, untreated anxiety disorders, substance/medication-induced depressive disorder (especially in clinicians who have been self-medicating), and posttraumatic stress disorder.

(cut)

Inadequate treatment can occur for physician patients because of transference and countertransference dynamics “that muddle the treatment dyad,” Dr. Myers added. “We must be mindful of the many issues that are going on when we treat our own.”

Association Between Physician Burnout and Identification With Medicine as a Calling

Andrew J. Jager, MA, Michael A. Tutty, PhD, Audiey C. Kao, PhD
Mayo Clinic Proceedings
DOI: http://dx.doi.org/10.1016/j.mayocp.2016.11.012

Objective

To evaluate the association between degree of professional burnout and physicians' sense of calling.

Participants and Methods

US physicians across all specialties were surveyed between October 24, 2014, and May 29, 2015. Professional burnout was assessed using a validated single-item measure. Sense of calling, defined as committing one's life to personally meaningful work that serves a prosocial purpose, was assessed using 6 validated true-false items. Associations between burnout and identification with calling items were assessed using multivariable logistic regressions.

Results

A total of 2263 physicians completed surveys (63.1% response rate). Among respondents, 28.5% (n=639) reported experiencing some degree of burnout. Compared with physicians who reported no burnout symptoms, those who were completely burned out had lower odds of finding their work rewarding (odds ratio [OR], 0.05; 95% CI, 0.02-0.10; P<.001), seeing their work as one of the most important things in their lives (OR, 0.38; 95% CI, 0.21-0.69; P<.001), or thinking their work makes the world a better place (OR, 0.38; 95% CI, 0.17-0.85; P=.02). Burnout was also associated with lower odds of enjoying talking about their work to others (OR, 0.23; 95% CI, 0.13-0.41; P<.001), choosing their work life again (OR, 0.11; 95% CI, 0.06-0.20; P<.001), or continuing with their current work even if they were no longer paid if they were financially stable (OR, 0.30; 95% CI, 0.15-0.59; P<.001).

Conclusion

Physicians who experience more burnout are less likely to identify with medicine as a calling. Erosion of the sense that medicine is a calling may have adverse consequences for physicians as well as those for whom they care.
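For readers unfamiliar with how the odds ratios and confidence intervals above are typically derived: in a multivariable logistic regression, the coefficient for a predictor is exponentiated to give the OR, and the 95% CI comes from exponentiating the coefficient plus or minus 1.96 standard errors. A minimal sketch of that conversion (illustrative only; the coefficient and standard error below are made-up numbers, not values from this study):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval.

    OR = exp(beta); CI = (exp(beta - z*se), exp(beta + z*se)).
    """
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
```

For example, a hypothetical coefficient of -3.0 with a standard error of 0.35 yields an OR of about 0.05, comparable in magnitude to the study's OR for burned-out physicians finding their work rewarding. An OR below 1 means the outcome is less likely in the burned-out group; a CI that excludes 1 corresponds to a statistically significant association.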

Wednesday, March 29, 2017

Neuroethics and the Ethical Parity Principle

DeMarco, J.P. & Ford, P.J.
Neuroethics (2014) 7: 317.
doi:10.1007/s12152-014-9211-6

Abstract

Neil Levy offers the most prominent moral principles that are specifically and exclusively designed to apply to neuroethics. His two closely related principles, labeled as versions of the ethical parity principle (EPP), are intended to resolve moral concerns about neurological modification and enhancement [1]. Though EPP is appealing and potentially illuminating, we reject the first version and substantially modify the second. Since his first principle, called EPP (strong), is dependent on the contention that the mind literally extends into external props such as paper notebooks and electronic devices, we begin with an examination of the extended mind hypothesis (EMH) and its use in Levy’s EPP (strong). We argue against reliance on EMH as support for EPP (strong). We turn to his second principle, EPP (weak), which is not dependent on EMH but is tied to the acceptable claim that the mind is embedded in, because dependent on, external props. As a result of our critique of EPP (weak), we develop a modified version of EPP (weak), which we argue is more acceptable than Levy’s principle. Finally, we evaluate the applicability of our version of EPP (weak).

The article is here.

Philosopher Daniel Dennett on AI, robots and religion

John Thornhill
Financial Times
Originally published March 3, 2017

Here are two excerpts:

AI experts tend to draw a sharp distinction between machine intelligence and human consciousness. Dennett is not so sure. Where many worry that robots are becoming too human, he argues humans have always been largely robotic. Our consciousness is the product of the interactions of billions of neurons that are all, as he puts it, “sorta robots”.

“I’ve been arguing for years that, yes, in principle it’s possible for human consciousness to be realised in a machine. After all, that’s what we are,” he says. “We’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.”

(cut)

The term “inversion of reason”, he says, came from one of Darwin’s 19th-century critics, outraged at the biologist’s counterintuitive thinking. Rather than accepting that an absolute intelligence was responsible for the creation of species, the critic denounced Darwin for believing that absolute ignorance had accomplished all the marvels of creative skill. “And of course that’s right. That’s exactly what Darwin was saying. Darwin says the nightingale is created by a process with no intelligence at all. So that’s the first inversion of reasoning.”

The article is here.

Tuesday, March 28, 2017

Why We Believe Obvious Untruths

Philip Fernbach & Steven Sloman
The New York Times
Originally published March 3, 2017

How can so many people believe things that are demonstrably false? The question has taken on new urgency as the Trump administration propagates falsehoods about voter fraud, climate change and crime statistics that large swaths of the population have bought into. But collective delusion is not new, nor is it the sole province of the political right. Plenty of liberals believe, counter to scientific consensus, that G.M.O.s are poisonous, and that vaccines cause autism.

The situation is vexing because it seems so easy to solve. The truth is obvious if you bother to look for it, right? This line of thinking leads to explanations of the hoodwinked masses that amount to little more than name calling: “Those people are foolish” or “Those people are monsters.”

Such accounts may make us feel good about ourselves, but they are misguided and simplistic: They reflect a misunderstanding of knowledge that focuses too narrowly on what goes on between our ears. Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works.

What really sets human beings apart is not our individual mental capacity. The secret to our success is our ability to jointly pursue complex goals by dividing cognitive labor. Hunting, trade, agriculture, manufacturing — all of our world-altering innovations — were made possible by this ability. Chimpanzees can surpass young children on numerical and spatial reasoning tasks, but they cannot come close on tasks that require collaborating with another individual to achieve a goal. Each of us knows only a little bit, but together we can achieve remarkable feats.

Facebook Is Using Artificial Intelligence To Help Prevent Suicide

Alex Kantrowitz
BuzzFeed
Originally published March 1, 2017

Facebook is bringing its artificial intelligence expertise to bear on suicide prevention, an issue that’s been top of mind for CEO Mark Zuckerberg following a series of suicides livestreamed via the company’s Facebook Live video service in recent months.

“It’s hard to be running this company and feel like, okay, well, we didn’t do anything because no one reported it to us,” Zuckerberg told BuzzFeed News in an interview last month. “You want to go build the technology that enables the friends and people in the community to go reach out and help in examples like that.”

Today, Facebook is introducing an important piece of that technology — a suicide-prevention feature that uses AI to identify posts indicating suicidal or harmful thoughts. The AI scans the posts and their associated comments, compares them to others that merited intervention, and, in some cases, passes them along to its community team for review. The company plans to proactively reach out to users it believes are at risk, showing them a screen with suicide-prevention resources including options to contact a helpline or reach out to a friend.
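The pipeline the article describes — score a post and its comments, then escalate high-risk cases to human reviewers — can be sketched in miniature. To be clear, the toy below uses an invented keyword list and threshold purely to show the score-and-escalate flow; Facebook's actual system relies on trained classifiers, not keyword matching.

```python
# Illustrative risk terms and weights: placeholders, not a real model.
RISK_TERMS = {"hopeless": 2, "goodbye": 1, "can't go on": 3}

def risk_score(text: str) -> int:
    """Sum the weights of risk terms found in the text."""
    t = text.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in t)

def triage(post: str, comments: list[str], threshold: int = 3) -> str:
    """Score a post plus its comments; escalate at or above threshold.

    Comments are included because concerned replies ("are you ok?")
    often accompany posts that warrant review.
    """
    total = risk_score(post) + sum(risk_score(c) for c in comments)
    return "escalate_to_review" if total >= threshold else "no_action"
```

The key design point mirrored here is that the machine only routes cases: the final judgment, and any outreach with suicide-prevention resources, stays with the human community team.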

The article is here.

Monday, March 27, 2017

Healthcare Data Breaches Up 40% Since 2015

Alexandria Wilson Pecci
MedPage Today
Originally posted February 26, 2017

Here is an excerpt:

Broken down by industry, hacking was the most common data breach source for the healthcare sector, according to data provided to HealthLeaders Media by the Identity Theft Resource Center. Physical theft was the biggest breach category for healthcare in 2015 and 2014.

Insider theft and employee error/negligence tied for the second most common data breach sources in 2016 in the health industry. In addition, insider theft was a bigger problem in the healthcare sector than in other industries, and has been for the past five years.

Insider theft is alleged to have been at play in the Jackson Health System incident. Former employee Evelina Sophia Reid was charged in a fourteen-count indictment with conspiracy to commit access device fraud, possessing fifteen or more unauthorized access devices, aggravated identity theft, and computer fraud, the Department of Justice said. Prosecutors say that her co-conspirators used the stolen information to file fraudulent tax returns in the patients' names.

The article is here.

US Researchers Found Guilty of Misconduct Collectively Awarded $101 Million

Joshua A. Krisch
The Scientist
February 27, 2017

Researchers found guilty of scientific misconduct by the US Department of Health and Human Services (HHS) went on to collectively receive $101 million from the National Institutes of Health (NIH), according to a study published this month (February 1) in the Journal of Empirical Research on Human Research Ethics. The authors also found that 47.2 percent of the researchers found guilty of misconduct they examined continue to publish studies.

The article is here.

The research is here.

Sunday, March 26, 2017

Moral Enhancement Using Non-invasive Brain Stimulation

R. Ryan Darby and Alvaro Pascual-Leone
Front. Hum. Neurosci., 22 February 2017
https://doi.org/10.3389/fnhum.2017.00077

Biomedical enhancement refers to the use of biomedical interventions to improve capacities beyond normal, rather than to treat deficiencies due to diseases. Enhancement can target physical or cognitive capacities, but also complex human behaviors such as morality. However, the complexity of normal moral behavior makes it unlikely that morality is a single capacity that can be deficient or enhanced. Instead, our central hypothesis will be that moral behavior results from multiple, interacting cognitive-affective networks in the brain. First, we will test this hypothesis by reviewing evidence for modulation of moral behavior using non-invasive brain stimulation. Next, we will discuss how this evidence affects ethical issues related to the use of moral enhancement. We end with the conclusion that while brain stimulation has the potential to alter moral behavior, such alteration is unlikely to improve moral behavior in all situations, and may even lead to less morally desirable behavior in some instances.

The article is here.

Saturday, March 25, 2017

White House Ethics Loophole for Ivanka 'Doesn't Work,' Say Watchdogs

Nika Knight
Common Dreams
Originally posted on March 24, 2017

Here are two excerpts:

The ethics advocates express "deep concern about the highly unusual and inappropriate arrangement that is being proposed for Ivanka Trump, the President's daughter, to play a formalized role in the White House without being required to comply with the ethics and disclosure requirements that apply to White House employees," arguing that the "arrangement appears designed to allow Ms. Trump to avoid the ethics, conflict-of-interest, and other rules that apply to White House employees."

(cut)

"The basic problem in the proposed relationship is that it appears to be trying to create a middle space that does not exist," the letter explains. "On the one hand Ms. Trump's position will provide her with the privileges and opportunities for public service that attach to being a White House employee. On the other hand, she remains the owner of a private business who is free from the ethics and conflicts rules that apply to White House employees."

The article is here.

Will Democracy Survive Big Data and Artificial Intelligence?

Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, and others
Scientific American
Originally posted February 25, 2017

Here is an excerpt:

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.

(cut)

These technologies are also becoming increasingly popular in the world of politics. Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a "nudge"—a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is "big nudging", which is the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered “wise king”, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.

The article is here.

Friday, March 24, 2017

A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity

Rothschild, Z.K. & Keefer, L.A.
Motiv Emot (2017). doi:10.1007/s11031-017-9601-2

Abstract

Why do people express moral outrage? While this sentiment often stems from a perceived violation of some moral principle, we test the counter-intuitive possibility that moral outrage at third-party transgressions is sometimes a means of reducing guilt over one’s own moral failings and restoring a moral identity. We tested this guilt-driven account of outrage in five studies examining outrage at corporate labor exploitation and environmental destruction. Study 1 showed that personal guilt uniquely predicted moral outrage at corporate harm-doing and support for retributive punishment. Ingroup (vs. outgroup) wrongdoing elicited outrage at corporations through increased guilt, while the opportunity to express outrage reduced guilt (Study 2) and restored perceived personal morality (Study 3). Study 4 tested whether effects were due merely to downward social comparison and Study 5 showed that guilt-driven outrage was attenuated by an affirmation of moral identity in an unrelated context.

The article is here.