Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, October 31, 2017

Does Your Gut Always Steer You Right?

Elizabeth Bernstein
The Wall Street Journal
Originally published October 9, 2017

Here is an excerpt:

When should you trust your gut? Consult your gut for complex decisions.

These include important, but not life-or-death, choices such as what car to buy, where to move, which job offer to accept. Your conscious mind will have too much information to sort through, and there may not be one clear choice. For example, there’s a lot to consider when deciding on a new home: neighborhood (Close to work but not as fun? Farther away but nicer?), price, type of home (Condo or house?). Research shows that when people are given four choices of which car to buy or which apartment to rent—with slightly different characteristics to each—and then are distracted from consciously thinking about their decision, they make better choices. “Our conscious mind is not very good at having all these choices going on at once,” says Dr. Bargh. “When you let your mind work on this without paying conscious attention, you make a better decision.”

Using unconscious and conscious thought to make a decision is often best. And conscious thought should come first. An excellent way to do this is to make a list of the benefits and drawbacks of each choice you could make. We are trained in rational decision-making, so this will satisfy your conscious mind. And sometimes the list will be enough to show you a clear decision.

But if it isn’t, put it away and do something that absorbs your conscious mind. Go for a hike or run, walk on the beach, play chess, practice a musical instrument. (No vegging out in front of the TV; that’s too mind-numbing, experts say.) “Go into yourself without distractions from the outside, and your unconscious will keep working on the problem,” says Emeran Mayer, a gastroenterologist and neuroscientist and the author of “The Mind-Gut Connection” and a professor at UCLA’s David Geffen School of Medicine.

If the stakes are high, try to think rationally, even if time is tight.

For example, if your gut tells you to jump in front of a train to help someone who just fell on the tracks, that might be worth risking your life. If it’s telling you to jump in front of that train because you dropped your purse, it’s not. Your rational mind, not your gut, will know the difference, Dr. Bargh says.

The article is here.

Note: As usual, I don't agree with everything in this article.

Who Is Rachael? Blade Runner and Personal Identity

Helen Beebee
iai news
Originally posted October 5, 2017

It’s no coincidence that a lot of philosophers are big fans of science fiction. Philosophers like to think about far-fetched scenarios or ‘thought experiments’, explore how they play out, and think about what light they can shed on how we should think about our own situation. What if you could travel back in time? Would you be able to kill your own grandfather, thereby preventing him from meeting your grandmother, meaning that you would never have been born in the first place? What if we could somehow predict with certainty what people would do? Would that mean that nobody had free will? What if I was really just a brain wired up to a sophisticated computer running virtual reality software? Should it matter to me that the world around me – including other people – is real rather than a VR simulation? And how do I know that it’s not?

Questions such as these routinely get posed in sci-fi books and films, and in a particularly vivid and thought-provoking way. In immersing yourself in an alternative version of reality, and by identifying or sympathising with the characters and seeing things from their point of view, you can often get a much better handle on the question. Philip K. Dick – whose Do Androids Dream of Electric Sheep?, first published in 1968, is the story on which the 1982 film Blade Runner is based –  was a master at exploring these kinds of philosophical questions. Often the question itself is left unstated; his characters are generally not much prone to philosophical rumination on their situation. But it’s there in the background nonetheless, waiting for you to find it and to think about what the answer might be.

Some of the questions raised by the original Dick story don’t get any, or much, attention in Blade Runner. Mercerism – the peculiar quasi-religion of the book, which is based on empathy and which turns out to be founded on a lie  – doesn’t get a mention in the film. And while, in the film as in the book, the capacity for empathy is what (supposedly) distinguishes humans from androids (or, in the film, replicants; apparently by 1982 ‘android’ was considered too dated a word), in the film we don’t get the suggestion that the purported significance of empathy, through its role in Mercerism, is really just a ploy: a way of making everyone think that androids lack, as it were, the essence of personhood, and hence can be enslaved and bumped off with impunity.

The article is here.

Monday, October 30, 2017

Nobel Prize in Economics Awarded to American Richard Thaler

David Gauthier-Villars in Stockholm and Ben Leubsdorf in Washington
The Wall Street Journal
Originally posted October 9, 2017

Here are two excerpts:

Mr. Thaler “has given us new insight into how human psychology shapes decision-making,” the academy said.

Asked to describe the takeaway from his research, Mr. Thaler told the academy and reporters: “The most important lesson is that economic agents are humans and that economic models have to incorporate that.”

(cut)

“I’ll try to spend it as irrationally as possible,” Mr. Thaler said.

The article is here.

Human Gene Editing Marches On

bioethics.net
Originally published October 6, 2017

Here is an excerpt:

In all three cases, the main biologic approach, and the main ethical issues, are the same.  The main differences were which genes were being edited, and how the embryos were obtained.

This prompted Nature to run an editorial to say that it is “time to take stock” of the ethics of this research.  Read the editorial here.  The key points:  This is important work that should be undertaken thoughtfully.  Accordingly, donors of any embryos or cells should be fully informed of the planned research.  Only as many embryos should be created as are necessary to do the research.  Work on embryos should be preceded by work on pluripotent, or “reprogrammed,” stem cells; if questions can be fully answered by work with those cells, then it may not be necessary to repeat the studies on whole, intact human embryos; and if that is not necessary, perhaps it should not be done.  Finally, everything should be peer reviewed.

I agree that editing work in non-totipotent cells should be at all times favored over work on intact embryos, but if one holds that an embryo is a human being that should have the benefits of protections afforded human research subjects, then Nature’s ethical principles are rather thin, little more than an extension of animal use provisions for studies in which early humans are the raw materials for the development of new medical treatments.

The article is here.

Sunday, October 29, 2017

Courage and Compassion: Virtues in Caring for So-Called “Difficult” Patients

Michael Hawking, Farr A. Curlin, and John D. Yoon
AMA Journal of Ethics. April 2017, Volume 19, Number 4: 357-363.

Abstract

What, if anything, can medical ethics offer to assist in the care of the “difficult” patient? We begin with a discussion of virtue theory and its application to medical ethics. We conceptualize the “difficult” patient as an example of a “moral stress test” that especially challenges the physician’s character, requiring the good physician to display the virtues of courage and compassion. We then consider two clinical vignettes to flesh out how these virtues might come into play in the care of “difficult” patients, and we conclude with a brief proposal for how medical educators might cultivate these essential character traits in physicians-in-training.

Here is an excerpt:

To give a concrete example of a virtue that will be familiar to anyone in medicine, consider the virtue of temperance. A temperate person exhibits appropriate self-control or restraint. Aristotle describes temperance as a mean between two extremes—in the case of eating, an extreme lack of temperance can lead to morbid obesity and its excess to anorexia. Intemperance is a hallmark of many of our patients, particularly among those with type 2 diabetes, alcoholism, or cigarette addiction. Clinicians know all too well the importance of temperance because they see the results for human beings who lack it—whether it be amputations and dialysis for the diabetic patient; cirrhosis, varices, and coagulopathy for the alcoholic patient; or chronic obstructive pulmonary disease and lung cancer for the lifelong smoker. In all of these cases, intemperance inhibits a person’s ability to flourish. These character traits do, of course, interact with social, cultural, and genetic factors in impacting an individual’s health, but a more thorough exploration of these factors is outside the scope of this paper.

The article is here.

Saturday, October 28, 2017

Post-conventional moral reasoning is associated with increased ventral striatal activity at rest and during task

Zhuo Fang, Wi Hoon Jung, Marc Korczykowski, Lijuan Luo, and others
Scientific Reports 7, Article number: 7105 (2017)

Abstract

People vary considerably in moral reasoning. According to Kohlberg’s theory, individuals who reach the highest level of post-conventional moral reasoning judge moral issues based on deeper principles and shared ideals rather than self-interest or adherence to laws and rules. Recent research has suggested the involvement of the brain’s frontostriatal reward system in moral judgments and prosocial behaviors. However, it remains unknown whether moral reasoning level is associated with differences in reward system function. Here, we combined arterial spin labeling perfusion and blood oxygen level-dependent functional magnetic resonance imaging and measured frontostriatal reward system activity both at rest and during a sequential risky decision making task in a sample of 64 participants at different levels of moral reasoning. Compared to individuals at the pre-conventional and conventional level of moral reasoning, post-conventional individuals showed increased resting cerebral blood flow in the ventral striatum and ventromedial prefrontal cortex. Cerebral blood flow in these brain regions correlated with the degree of post-conventional thinking across groups. Post-conventional individuals also showed greater task-induced activation in the ventral striatum during risky decision making. These findings suggest that high-level post-conventional moral reasoning is associated with increased activity in the brain’s frontostriatal system, regardless of task-dependent or task-independent states.

The article is here.

Friday, October 27, 2017

Is utilitarian sacrifice becoming more morally permissible?

Ivar R. Hannikainen, Edouard Machery, & Fiery A. Cushman
Cognition
Volume 170, January 2018, Pages 95-101

Abstract

A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.


Here is a portion of the Discussion section:

A vibrant discussion among philosophers and cognitive scientists has focused on distinguishing the virtues and pitfalls of the human moral faculty (Bloom, 2017; Greene, 2014; Singer, 2005). On a pessimistic note, our results dovetail with evidence about the socialization and development of recent cohorts (e.g., Shonkoff et al., 2012): Utilitarian judgment has been shown to correlate with Machiavellian and psychopathic traits (Bartels & Pizarro, 2011), and also with the reduced capacity to distinguish felt emotions (Patil & Silani, 2014). At the same time, leading theories credit highly acclaimed instances of moral progress to the exercise of rational scrutiny over prevailing moral norms (Greene, 2014; Singer, 2005), and the persistence of parochialism and prejudice to the unbridled command of intuition (Bloom, 2017). From this perspective, greater disapproval of intuitive deontological principles among recent cohorts may stem from the documented rise in cognitive abilities (i.e., the Flynn effect; see Pietschnig & Voracek, 2015) and foreshadow an expanding commitment to the welfare-maximizing resolution of contemporary moral challenges.

Middle managers may turn to unethical behavior to face unrealistic expectations

Science Daily
Originally published October 5, 2017

While unethical behavior in organizations is often portrayed as flowing down from top management, or creeping up from low-level positions, a team of researchers suggests that middle management also can play a key role in promoting widespread unethical behavior among their subordinates.

In a study of a large telecommunications company, researchers found that middle managers used a range of tactics to inflate their subordinates' performance and deceive top management, according to Linda Treviño, distinguished professor of organizational behavior and ethics, Smeal College of Business, Penn State. The managers may have been motivated to engage in this behavior because leadership instituted performance targets that were unrealizable, she added.

(cut)

Middle managers also used a range of tactics to coerce their subordinates to keep up the ruse, including rewards for unethical behavior and public shaming for those who were reluctant to engage in the unethical tactics.

"Interestingly, what we didn't see is managers speaking up, we didn't see them pushing back against the unrealistic goals," said Treviño. "We know a lot about what we refer to as 'voice' in an organization and people are fearful and they tend to keep quiet for the most part."

The article is here.

The target article is here.

Thursday, October 26, 2017

After medical error, apology goes a long way

Science Daily
Originally posted October 2, 2017

Summary: Discussing hospital errors with patients leads to better patient safety without spurring a barrage of malpractice claims, new research shows.

In patient injury cases, revealing facts, offering apology does not lead to increase in lawsuits, study finds

Sometimes a straightforward explanation and an apology for what went wrong in the hospital goes a long way toward preventing medical malpractice litigation and improving patient safety.

That's what Michelle Mello, JD, PhD, and her colleagues found in a study to be published Oct. 2 in Health Affairs.

Mello, a professor of health research and policy and of law at Stanford University, is the lead author of the study. The senior author is Kenneth Sands, former senior vice president at Beth Israel Deaconess Medical Center.

Medical injuries are a leading cause of death in the United States. The lawsuits they spawn are also a major concern for physicians and health care facilities. So, hospital risk managers and liability insurers are experimenting with new approaches to resolving these disputes that channel them away from litigation.

The focus is on meeting patients' needs without requiring them to sue. Hospitals disclose accidents to patients, investigate and explain why they occurred, apologize and, in cases in which the harm was due to a medical error, offer compensation and reassurance that steps will be taken to keep it from happening again.

The article is here.

The target article is here.

DeepMind launches new research team to investigate AI ethics

James Vincent
The Verge
Originally posted October 4, 2017

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.

The article is here.

Wednesday, October 25, 2017

Cultivating Humility and Diagnostic Openness in Clinical Judgment

John R. Stone
AMA Journal of Ethics. October 2017, Volume 19, Number 10: 970-977.

Abstract
In this case, a physician rejects a patient’s concerns that tainted water is harming the patient and her community. Stereotypes and biases regarding socioeconomic class and race/ethnicity, constraining diagnostic frameworks, and fixed first impressions could skew the physician’s judgment. This paper narratively illustrates how cultivating humility could help the physician truly hear the patient’s suggestions. The discussion builds on the multifaceted concept of cultural humility as a lifelong journey that addresses not only stereotypes and biases but also power inequalities and community inequities. Insurgent multiculturalism is a complementary concept. Through epistemic humility—which includes both intellectual and emotional components—and admitting uncertainty, physicians can enhance patients’ and families’ epistemic authority and health agency.

The article is here.

Physician licensing laws keep doctors from seeking care

Bob Nellis
Mayo Clinic News Network

Despite growing problems with psychological distress, many physicians avoid seeking mental health treatment due to concern for their license. Mayo Clinic research shows that licensing requirements in many states include questions about past mental health treatments or diagnoses, with the implication that they may limit a doctor's right to practice medicine. The findings appear today in Mayo Clinic Proceedings.

“Clearly, in some states, the questions physicians are required to answer to obtain or renew their license are keeping them from seeking the help they need to recover from burnout and other emotional or mental health issues,” says Liselotte Dyrbye, M.D., a Mayo Clinic physician and first author of the article.

The researchers examined the licensing documents for physicians in all 50 states and Washington, D.C., and renewal applications from 48 states. They also collected data in a national survey of more than 5,800 physicians, including attitudes about seeking mental health care.

Nearly 40 percent of respondents said they would hesitate in seeking professional help for a mental health condition because they feared doing so could have negative impacts on their medical license.

The article is here.

The target article is here.

Tuesday, October 24, 2017

Gaslighting, betrayal and the boogeyman: Personal reflections on the American Psychological Association, PENS and the involvement of psychologists in torture

Nina Thomas
International Journal of Applied Psychoanalytic Studies

Abstract

The American Psychological Association's (APA's) sanctioning psychologists' involvement in “enhanced interrogations,” aka torture, authorized by the closely parsed re-interpretation of relevant law by the Bush administration, has roiled the association since it appointed a task force in 2005. The Psychological Ethics and National Security (PENS) task force, its composition, methods and outcomes have brought public shame to the profession, the association and its members. Having served on the task force and been involved in the aftermath, I offer reflections on my role to provide an insider's look at the struggle I experienced over loyalty to principle, profession, colleagues, and the association. Situating what occurred in the course of the PENS process and its aftermath within the framework of Freyd and her collaborators' theory of “betrayal trauma,” in particular “institutional trauma,” I suggest that others too share similar feelings of profound betrayal by an organization with which so many of us have been identified over the course of many years. I explore the ways in which attachments have been challenged and undermined by what occurred. Among the questions I have grappled with are: Was I the betrayed or betrayer, or both? How can similar self-reflection usefully be undertaken both by the association itself and other members about their actions or inactions?

The article is here.

'The deserving’: Moral reasoning and ideological dilemmas in public responses to humanitarian communications

Irene Bruna Seu
British Journal of Social Psychology 55 (4), pp. 739-755.

Abstract

This paper investigates everyday moral reasoning in relation to donations and prosocial behaviour in a humanitarian context. The discursive analysis focuses on the principles of deservingness which members of the public use to decide whom to help and under what conditions. The paper discusses three repertoires of deservingness: 'Seeing a difference', 'Waiting in queues' and 'Something for nothing' to illustrate participants' dilemmatic reasoning and to examine how the position of 'being deserving' is negotiated in humanitarian crises. Discursive analyses of these dilemmatic repertoires of deservingness identify the cultural and ideological resources behind these constructions and show how humanitarianism intersects and clashes with other ideologies and value systems. The data suggest that a neoliberal ideology, which endorses self-gratification and materialistic and individualistic ethics, and cultural assimilation of helper and receiver play important roles in decisions about humanitarian helping. The paper argues for the need for psychological research to engage more actively with the dilemmas involved in the moral reasoning related to humanitarianism and to contextualize decisions about giving and helping within the socio-cultural and ideological landscape in which the helper operates.

The research is here.

Monday, October 23, 2017

Holding People Responsible for Ethical Violations: The Surprising Benefits of Accusing Others

Jessica A. Kennedy and Maurice E. Schweitzer
Wharton Behavioral Lab

Abstract

Individuals who accuse others of unethical behavior can derive significant benefits.  Compared to individuals who do not make accusations, accusers engender greater trust and are perceived to have higher ethical standards. In Study 1, accusations increased trust in the accuser and lowered trust in the target. In Study 2, we find that accusations elevate trust in the accuser by boosting perceptions of the accuser’s ethical standards. In Study 3, we find that accusations boosted both attitudinal and behavioral trust in the accuser, decreased trust in the target, and promoted relationship conflict within the group. In Study 4, we examine the moderating role of moral hypocrisy. Compared to individuals who did not make an accusation, individuals who made an accusation were trusted more if they had acted ethically but not if they had acted unethically. Taken together, we find that accusations have significant interpersonal consequences. In addition to harming accused targets, accusations can substantially benefit accusers.

Here is part of the Discussion:

It is possible, however, that even as accusations promote group conflict, accusations could benefit organizations by enforcing norms and promoting ethical behavior. To ensure ethical conduct, organizations must set an ethical tone (Mayer et al., 2013). To do so, organizations need to encourage detection and punishment of unethical behavior. Punishment of norm violators has been conceptualized as an altruistic behavior (Fehr & Gachter, 2000). Our findings challenge this conceptualization. Rather than reflecting altruism, accusers may derive substantial personal benefits from punishing norm violators. The trust benefits of making an accusation provide a reason for even the most self-interested actors to intervene when they perceive unethical activity. That is, even when self-interest is the norm (e.g., Pillutla & Chen, 1999), individuals have trust incentives to openly oppose unethical behavior.

The research is here.

Reciprocity Outperforms Conformity to Promote Cooperation

Angelo Romano, Daniel Balliet
Psychological Science
First Published September 6, 2017

Abstract

Evolutionary psychologists have proposed two processes that could give rise to the pervasiveness of human cooperation observed among individuals who are not genetically related: reciprocity and conformity. We tested whether reciprocity outperformed conformity in promoting cooperation, especially when these psychological processes would promote a different cooperative or noncooperative response. To do so, across three studies, we observed participants’ cooperation with a partner after learning (a) that their partner had behaved cooperatively (or not) on several previous trials and (b) that their group members had behaved cooperatively (or not) on several previous trials with that same partner. Although we found that people both reciprocate and conform, reciprocity has a stronger influence on cooperation. Moreover, we found that conformity can be partly explained by a concern about one’s reputation—a finding that supports a reciprocity framework.

The article is here.

Sunday, October 22, 2017

A Car Crash And A Mistrial Cast Doubts On Court-Ordered Mental Health Exams

Steve Burger
Side Effects Public Media: Public Health/Personal Stories
Originally posted September 26, 2017

Here is an excerpt:

Investigating a lie

Fink was often hired by the courts in Indiana, and over the last ten years had performed dozens of these competency evaluations. His scene-of-the-crash confession called into question not only the Loving trial, but every one he ever worked on.

Courts rely on psychologists to assess the mental fitness of defendants, but Fink’s story raises serious questions about how courts determine mental competency in Indiana and what system of oversight is in place to ensure defendants get a valid examination.

The judge declared a mistrial in Caleb Loving’s case, but Fink’s confession prompted a massive months-long investigation in Vanderburgh County.

Hermann led the investigation, working to untangle a mess of nearly 70 cases for which Fink performed exams or testing, determined to discover the extent of the damage he had done.

“A lot of different agencies participated in that investigation,” Hermann said. “It was a troubling case, in that someone who was literally hired by the court to come in and testify about something … [was] lying.”

The county auditor’s office provided payment histories of psychologists hired by the courts, and the Evansville Police Department spent hundreds of hours looking through records. The courts helped Hermann get access to the cases that Albert Fink had worked on.

Trump's ethics critics get their day in court

Julia Horowitz 
CNN.com
Originally published October 17, 2017

Ethics experts have been pressing President Trump in the media for months. On Wednesday, they'll finally get their day in court.

At the center of a federal lawsuit in New York is the U.S. Constitution's Foreign Emoluments Clause, which bars the president from accepting gifts from foreign governments without permission from Congress.

Citizens for Responsibility and Ethics in Washington, a watchdog group, will lay out its case before Judge George Daniels. Lawyers for the Justice Department have asked the judge to dismiss the case.

The obscure provision of the Constitution is an issue because Trump refused to sell his business holdings before the inauguration. Instead, he placed his assets in a trust and handed the reins of the Trump Organization to his two oldest sons, Don Jr. and Eric.

The terms of the trust make it so Trump can technically withdraw cash payments from his businesses any time he wants. He can also dissolve the trust when he leaves office -- so if his businesses do well, he'll ultimately profit.

CREW claims that because government leaders and entities frequent his hotels, clubs and restaurants, Trump is in breach of the Emoluments Clause. The fear is that international officials will try to curry favor with Trump by patronizing his properties.

The article is here.

Saturday, October 21, 2017

Thinking about the social cost of technology

Natasha Lomas
Tech Crunch
Originally posted September 30, 2017

Here is an excerpt:

Meanwhile, ‘users’ like my mum are left with another cryptic puzzle of unfamiliar pieces to try to slot back together and — they hope — return the tool to the state of utility it was in before everything changed on them again.

These people will increasingly feel left behind and unplugged from a society where technology is playing an ever greater day-to-day role, and also playing an ever greater, yet largely unseen role in shaping day to day society by controlling so many things we see and do. AI is the silent decision maker that really scales.

The frustration and stress caused by complex technologies that can seem unknowable — not to mention the time and mindshare that gets wasted trying to make systems work as people want them to work — doesn’t tend to get talked about in the slick presentations of tech firms with their laser pointers fixed on the future and their intent locked on winning the game of the next big thing.

All too often the fact that human lives are increasingly enmeshed with and dependent on ever more complex, and ever more inscrutable, technologies is considered a good thing. Negatives don’t generally get dwelled on. And for the most part people are expected to move along, or be moved along by the tech.

That’s the price of progress, goes the short sharp shrug. Users are expected to use the tool — and take responsibility for not being confused by the tool.

But what if the user can’t properly use the system because they don’t know how to? Are they at fault? Or is it the designers failing to properly articulate what they’ve built and pushed out at such scale? And failing to layer complexity in a way that does not alienate and exclude?

And what happens when the tool becomes so all consuming of people’s attention and so capable of pushing individual buttons it becomes a mainstream source of public opinion? And does so without showing its workings. Without making it clear it’s actually presenting a filtered, algorithmically controlled view.

There’s no newspaper style masthead or TV news captions to signify the existence of Facebook’s algorithmic editors. But increasingly people are tuning in to social media to consume news.

This signifies a major, major shift.

The article is here.

Stunner On Birth Control: Trump’s Moral Exemption Is Geared To Just 2 Groups

Julie Rovner
Kaiser Health News
Originally posted October 16, 2017

Here is an excerpt:

So what’s the difference between religious beliefs and moral convictions?

“Theoretically, it would be someone who says ‘I don’t have a belief in God,’ but ‘I oppose contraception for reasons that have nothing to do with religion or God,’ ” said Mark Rienzi, a senior counsel for the Becket Fund for Religious Liberty, which represented many of the organizations that sued the Obama administration over the contraceptive mandate.

Nicholas Bagley, a law professor at the University of Michigan, said it would apply to “an organization that has strong moral convictions but does not associate itself with any particular religion.”

What kind of an organization would that be? It turns out not to be such a mystery, Rienzi and Bagley agreed.

Among the hundreds of organizations that sued over the mandate, two — the Washington, D.C.-based March for Life and the Pennsylvania-based Real Alternatives — are anti-abortion groups that do not qualify for religious exemptions. While their employees may be religious, the groups themselves are not.

The article is here.

Friday, October 20, 2017

A virtue ethics approach to moral dilemmas in medicine

P Gardiner
J Med Ethics. 2003 Oct; 29(5): 297–302.

Abstract

Most moral dilemmas in medicine are analysed using the four principles with some consideration of consequentialism but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

A pdf download of the article can be found here.

Note from John: This article is interesting for a myriad of reasons. For me, we ethics educators have come a long way in 14 years.

The American Psychological Association and torture: How could it happen?

Bryant Welch
International Journal of Applied Psychoanalytic Studies
Volume 14 (2)

Here is an excerpt:

This same grandiosity was ubiquitous in the governance's rhetoric at the heart of the association's discussions on torture. Banning psychologists' participation in reputed torture mills was clearly unnecessary, proponents of the APA policy argued. To do so would be an “insult” to military psychologists everywhere. No psychologist would ever engage in torture. Insisting on a change in APA policy reflected a mean-spirited attitude toward the military psychologists. The supporters of the APA policy managed to transform the military into the victims in the interrogation issue.

In the end, however, it was psychologists' self-assumed importance that carried the day on the torture issue. Psychologists' participation in these detention centers, it was asserted, was an antidote to torture, since psychologists' very presence could protect the potential torture victims (presumably from Rumsfeld and Cheney, no less!). The debates on the APA Council floor, year after year, concluded with the general consensus that, indeed, psychology was very, very important to our nation's security. In fact the APA Ethics Director repeatedly advised members of the APA governance that psychologists' presence was necessary to make sure the interrogations were “safe, legal, ethical, and effective.”

We psychologists were both too good and too important to join our professional colleagues in other professions who were taking an absolutist moral position against one of the most shameful eras in our country's history. While the matter was clearly orchestrated by others, it was this self-reinforcing grandiosity that led the traditionally liberal APA governance down the slippery slope to the Bush administration's torture program.

During this period I had numerous personal communications with members of the APA governance structure in an attempt to dissuade them from ignoring the rank-and-file psychologists who abhorred the APA's position. I have been involved in many policy disagreements over the course of my career, but the smugness and illogic that characterized the response to these efforts were astonishing and went far beyond normal, even heated, give and take. Most dramatically, the intelligence that I have always found to characterize the profession of psychology was sorely lacking.

Thursday, October 19, 2017

‘But you can’t do that!’ Why immoral actions seem impossible

Jonathan Phillips
Aeon
Originally posted September 29, 2017

Suppose that you’re on the way to the airport to catch a flight, but your car breaks down. Some of the actions you immediately consider are obvious: you might try to call a friend, look for a taxi, or book a later flight. If those don’t work out, you might consider something more far-fetched, such as finding public transportation or getting the tow-truck driver to tow you to the airport. But here’s a possibility that would likely never come to mind: you could take a taxi but not pay for it when you get to the airport. Why wouldn’t you think of this? After all, it’s a pretty sure-fire way to get to the airport on time, and it’s definitely cheaper than having your car towed.

One natural answer is that you don’t consider this possibility because you’re a morally good person who wouldn’t actually do that. But there are at least two reasons why this doesn’t seem like a compelling answer to the question, even if you are morally good. The first is that, though being a good person would explain why you wouldn’t actually do this, it doesn’t seem to explain why you wouldn’t have been able to come up with this as a solution in the first place. After all, your good moral character doesn’t stop you from admitting that it is a way of getting to the airport, even if you wouldn’t go through with it. And the second reason is that it seems equally likely that you wouldn’t have come up with this possibility for someone else in the same situation – even someone whom you didn’t know was morally good.

So what does explain why we don’t consider the possibility of taking a taxi but not paying? Here’s a radically different suggestion: before I mentioned it, you didn’t think it was even possible to do that. This explanation probably strikes you as too strong, but the key to it is that I’m not arguing that you think it’s impossible now, I’m arguing that you didn’t think it was possible before I proposed it.

Is There an Ideal Amount of Income Inequality?

Brian Gallagher
Nautilus
Originally published September 28, 2017

Here is an excerpt:

Is extreme inequality a serious problem?

Extreme inequality in the United States, and elsewhere, is deeply troubling on a number of fronts. First, there is the moral issue. For a country explicitly founded on the principles of liberty, equality, and the pursuit of happiness, protected by the “government of the people, by the people, for the people,” extreme inequality raises troubling questions of social justice that get at the very foundations of our society. We seem to have a “government of the 1 percent by the 1 percent for the 1 percent,” as the economics Nobel laureate Joseph Stiglitz wrote in his Vanity Fair essay. The Harvard philosopher Tim Scanlon argues that extreme inequality is bad for the following reasons: (1) economic inequality can give wealthier people an unacceptable degree of control over the lives of others; (2) economic inequality can undermine the fairness of political institutions; (3) economic inequality undermines the fairness of the economic system itself; and (4) workers, as participants in a scheme of cooperation that produces national income, have a claim to a fair share of what they have helped to produce.

You’re an engineer. How did you get interested in inequality?

I do design, control, optimization, and risk management for a living. I’m used to designing large systems, like chemical plants. I have a pretty good intuition for how systems will operate, how they can run efficiently, and how they may fail. When I started thinking about the free market and society as systems, I already had an intuitive grasp about their function. Clearly there are differences between a system of inanimate entities, like chemical plants, and human society. But they’re both systems, so there’s a lot of commonalities as well. My experience as a systems engineer helped me as I was groping in the darkness to get my hand around these issues, and to ask the right questions.

The article is here.

Wednesday, October 18, 2017

When Doing Some Good Is Evaluated as Worse Than Doing No Good at All

George E. Newman and Daylian M. Cain
Psychological Science published online 8 January 2014

Abstract

In four experiments, we found that the presence of self-interest in the charitable domain was seen as tainting: People evaluated efforts that realized both charitable and personal benefits as worse than analogous behaviors that produced no charitable benefit. This tainted-altruism effect was observed in a variety of contexts and extended to both moral evaluations of other agents and participants’ own behavioral intentions (e.g., reported willingness to hire someone or purchase a company’s products). This effect did not seem to be driven by expectations that profits would be realized at the direct cost of charitable benefits, or the explicit use of charity as a means to an end. Rather, we found that it was related to the accessibility of different counterfactuals: When someone was charitable for self-interested reasons, people considered his or her behavior in the absence of self-interest, ultimately concluding that the person did not behave as altruistically as he or she could have. However, when someone was only selfish, people did not spontaneously consider whether the person could have been more altruistic.

The article is here.

Danny Kahneman on AI versus Humans


NBER Economics of AI Workshop 2017

Here is a rough transcription of an excerpt:

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called “judgment”. Here in my noggin it’s “evaluation of outcomes”: the utility side of the decision function. I really don’t see why that should be reserved to humans.

I’d like to make the following argument:
  1. The main characteristic of people is that they’re very “noisy”.
  2. You show them the same stimulus twice, they don’t give you the same response twice.
  3. You show the same choice twice, I mean. That’s why we had stochastic choice theory: because there is so much variability in people’s choices given the same stimuli.
  4. Now what can be done, even without AI, is a program that observes an individual, that will be better than the individual and will make better choices for the individual because it will be noise-free.
  5. We know an interesting tidbit from the literature that Colin cited on predictions:
  6. If you take clinicians and you have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician’s judgment, that model does better in predicting the outcome than the clinician.
  7. That is fundamental.
This is telling you that one of the major limitations on human performance is not bias, it is just noise.
I’m maybe partly responsible for this, but people now, when they talk about error, tend to think of bias as an explanation: the first thing that comes to mind. Well, there is bias, and it is an error. But in fact most of the errors that people make are better viewed as random noise. And there’s an awful lot of it.
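Points 4 through 6 describe the classic “model of the judge” (bootstrapping) result from the clinical-prediction literature: a noise-free linear model fitted to a judge's own decisions typically predicts the outcome better than the judge does. Below is a minimal, hypothetical Python sketch of that mechanism; the cue weights, noise levels, and sample size are invented for illustration and are not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 2000, 3

# Each case has a few observable cues; the true criterion is a noisy
# function of those cues (the environment itself is not fully predictable).
cues = rng.normal(size=(n_cases, n_cues))
true_weights = np.array([0.5, 0.3, 0.2])
criterion = cues @ true_weights + rng.normal(scale=0.5, size=n_cases)

# The simulated clinician uses roughly sensible weights but adds
# trial-to-trial noise -- the "noisy" judgments Kahneman describes.
clinician_weights = np.array([0.45, 0.35, 0.20])
judgments = cues @ clinician_weights + rng.normal(scale=0.8, size=n_cases)

# Fit a simple linear model OF THE CLINICIAN (predicting the judgments,
# not the outcome), then use that noise-free model to predict the criterion.
model_weights, *_ = np.linalg.lstsq(cues, judgments, rcond=None)
model_predictions = cues @ model_weights

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(f"clinician vs. criterion:          r = {corr(judgments, criterion):.3f}")
print(f"model-of-clinician vs. criterion: r = {corr(model_predictions, criterion):.3f}")
# The fitted model applies the clinician's average weighting policy without
# the trial-to-trial noise, so it typically tracks the outcome better than
# the clinician's own judgments do.
```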

The entire transcript and target article is here.

Tuesday, October 17, 2017

Work and the Loneliness Epidemic

Vivek Murthy
Harvard Business Review

Here is an excerpt:

During my years caring for patients, the most common pathology I saw was not heart disease or diabetes; it was loneliness. The elderly man who came to our hospital every few weeks seeking relief from chronic pain was also looking for human connection: He was lonely. The middle-aged woman battling advanced HIV who had no one to call to inform that she was sick: She was lonely too. I found that loneliness was often in the background of clinical illness, contributing to disease and making it harder for patients to cope and heal.

This may not surprise you. Chances are, you or someone you know has been struggling with loneliness. And that can be a serious problem. Loneliness and weak social connections are associated with a reduction in lifespan similar to that caused by smoking 15 cigarettes a day and even greater than that associated with obesity. But we haven’t focused nearly as much effort on strengthening connections between people as we have on curbing tobacco use or obesity. Loneliness is also associated with a greater risk of cardiovascular disease, dementia, depression, and anxiety. At work, loneliness reduces task performance, limits creativity, and impairs other aspects of executive function such as reasoning and decision making. For our health and our work, it is imperative that we address the loneliness epidemic quickly.

Once we understand the profound human and economic costs of loneliness, we must determine whose responsibility it is to address the problem.

The article is here.

Is it Ethical for Scientists to Create Nonhuman Primates with Brain Disorders?

Carolyn P. Neuhaus
The Hastings Center
Originally published on September 25, 2017

Here is an excerpt:

Such is the rationale for creating primate models: the brain disorders under investigation cannot be accurately modelled in other nonhuman organisms, because of differences in genetics, brain structure, and behaviors. But research involving humans with brain disorders is also morally fraught. Some people with brain disorders experience impairments to decision-making capacity as a component or symptom of disease, and therefore are unable to provide truly informed consent to research participation. Some of the research is too invasive, and would be grossly unethical to carry out with human subjects. So, nonhuman primates, and macaques in particular, occupy a “sweet spot.” Their genetic code and brain structure are sufficiently similar to humans’ so as to provide a valid and accurate model of human brain disorders. But, they are not conferred protections from research that apply to humans and to some non-human primates, notably chimpanzees and great apes. In the United States, for example, chimpanzees are protected from invasive research, but other primates are not. Some have suggested, including in a recent article in Journal of Medical Ethics, that protections like those afforded to chimpanzees ought to be extended to other primates and other animals, such as dogs, as evidence mounts that they also have complex cognitive, social, and emotional lives. For now, macaques and other primates remain in use.

Prior to the discovery of genome editing tools like ZFNs, TALENs, and most recently, CRISPR, it was extremely challenging, almost to the point of being prohibitive, to create non-human primates with precise, heritable genome modifications. But CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) presents a technological advance that brings genome engineering of non-human primates well within reach.

The article is here.

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
BBC.com
Originally published October 15, 2017

Here is an excerpt:

However machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language from mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases, and to try to combat them.
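A toy calculation can make the name-association point concrete. The sketch below uses invented three-dimensional vectors as stand-ins for real learned word embeddings (which have hundreds of dimensions and are trained on large text corpora); it only illustrates how a WEAT-style similarity gap between groups of names would show up, not the behaviour of any particular system.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy "embeddings" purely for illustration.
vectors = {
    "john":      np.array([0.9, 0.1, 0.3]),
    "fred":      np.array([0.8, 0.2, 0.3]),
    "joanna":    np.array([0.1, 0.9, 0.3]),
    "fiona":     np.array([0.2, 0.8, 0.3]),
    "scientist": np.array([0.7, 0.3, 0.5]),
}

# How close is each name to "scientist"? A systematic gap between the two
# groups of names is the kind of imported bias the article describes.
for name in ["john", "fred", "joanna", "fiona"]:
    print(name, round(cosine(vectors[name], vectors["scientist"]), 3))
```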

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

No Child Left Alone: Moral Judgments about Parents Affect Estimates of Risk to Children

Thomas, A. J., Stanford, P. K., & Sarnecka, B. W. (2016).
Collabra, 2(1), 10.

Abstract

In recent decades, Americans have adopted a parenting norm in which every child is expected to be under constant direct adult supervision. Parents who violate this norm by allowing their children to be alone, even for short periods of time, often face harsh criticism and even legal action. This is true despite the fact that children are much more likely to be hurt, for example, in car accidents. Why then do bystanders call 911 when they see children playing in parks, but not when they see children riding in cars? Here, we present results from six studies indicating that moral judgments play a role: The less morally acceptable a parent’s reason for leaving a child alone, the more danger people think the child is in. This suggests that people’s estimates of danger to unsupervised children are affected by an intuition that parents who leave their children alone have done something morally wrong.

Here is part of the discussion:

The most important conclusion we draw from this set of experiments is the following: People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous. That is, people overestimate the actual danger to children who are left alone by their parents, in order to better support or justify their moral condemnation of parents who do so.

This brings us back to our opening question: How can we explain the recent hysteria about unsupervised children, often wildly out of proportion to the actual risks posed by the situation? Our findings suggest that once a moralized norm of ‘No child left alone’ was generated, people began to feel morally outraged by parents who violated that norm. The need (or opportunity) to better support or justify this outrage then elevated people’s estimates of the actual dangers faced by children. These elevated risk estimates, in turn, may have led to even stronger moral condemnation of parents and so on, in a self-reinforcing feedback loop.

The article is here.

Sunday, October 15, 2017

Official sends memo to agency leaders about ethical conduct

Avery Anapol
The Hill
Originally published October 10, 2017

The head of the Office of Government Ethics is calling on the leaders of government agencies to promote an “ethical culture.”

David Apol, acting director of the ethics office, sent a memo to agency heads titled, “The Role of Agency Leaders in Promoting an Ethical Culture.” The letter was sent to more than 100 agency heads, CNN reported.

“It is essential to the success of our republic that citizens can trust that your decisions and the decisions made by your agency are motivated by the public good and not by personal interests,” the memo reads.

Several government officials are under investigation for their use of chartered planes for government business.

One Cabinet official, former Health secretary Tom Price, resigned over his use of private jets. Treasury Secretary Steven Mnuchin is also under scrutiny for his travels.

“I am deeply concerned that the actions of some in Government leadership have harmed perceptions about the importance of ethics and what conduct is, and is not, permissible,” Apol wrote.

The memo includes seven suggested actions that Apol says leaders should take to strengthen the ethical culture in their agencies. The suggestions include putting ethics officials in senior leadership meetings, and “modeling a ‘Should I do it?’ mentality versus a ‘Can I do it?’ mentality.”

The article is here.

Saturday, October 14, 2017

Who Sees What as Fair? Mapping Individual Differences in Valuation of Reciprocity, Charity, and Impartiality

Laura Niemi and Liane Young
Social Justice Research

When scarce resources are allocated, different criteria may be considered: impersonal allocation (impartiality), the needs of specific individuals (charity), or the relational ties between individuals (reciprocity). In the present research, we investigated how people’s perspectives on fairness relate to individual differences in interpersonal orientations. Participants evaluated the fairness of allocations based on (a) impartiality, (b) charity, and (c) reciprocity. To assess interpersonal orientations, we administered measures of dispositional empathy (i.e., empathic concern and perspective-taking) and Machiavellianism. Across two studies, Machiavellianism correlated with higher ratings of reciprocity as fair, whereas empathic concern and perspective taking correlated with higher ratings of charity as fair. We discuss these findings in relation to recent neuroscientific research on empathy, fairness, and moral evaluations of resource allocations.

The article is here.

Friday, October 13, 2017

Moral Distress: A Call to Action

The Editor
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 533-536.

During medical school, I was exposed for the first time to ethical considerations that stemmed from my new role in the direct provision of patient care. Ethical obligations were now both personal and professional, and I had to navigate conflicts between my own values and those of patients, their families, and other members of the health care team. However, I felt paralyzed by factors such as my relative lack of medical experience, low position in the hospital hierarchy, and concerns about evaluation. I experienced a profound and new feeling of futility and exhaustion, one that my peers also often described.

I have since realized that this experience was likely “moral distress,” a phenomenon originally described by Andrew Jameton in 1984. For this issue, the following definition, adapted from Jameton, will be used: moral distress occurs when a clinician makes a moral judgment about a case in which he or she is involved and an external constraint makes it difficult or impossible to act on that judgment, resulting in “painful feelings and/or psychological disequilibrium”. Moral distress has subsequently been shown to be associated with burnout, which includes poor coping mechanisms such as moral disengagement, blunting, denial, and interpersonal conflict.

Moral distress as originally conceived by Jameton pertained to nurses and has been extensively studied in the nursing literature. However, until a few years ago, the literature has been silent on the moral distress of medical students and physicians.

The article is here.

Automation on our own terms

Benedict Dellot and Fabian Wallace-Stephens
Medium.com
Originally published September 17, 2017

Here is an excerpt:

There are three main risks of embracing AI and robotics unreservedly:
  1. A rise in economic inequality — To the extent that technology deskills jobs, it will put downward pressure on earnings. If jobs are removed altogether as a result of automation, the result will be greater returns for those who make and deploy the technology, as well as the elite workers left behind in firms. The median OECD country has already seen a decrease in its labour share of income of about 5 percentage points since the early 1990s, with capital’s share swallowing the difference. Another risk here is market concentration. If large firms continue to adopt AI and robotics at a faster rate than small firms, they will gain enormous efficiency advantages and as a result could take excessive share of markets. Automation could lead to oligopolistic markets, where a handful of firms dominate at the expense of others.
  2. A deepening of geographic disparities — Since the computer revolution of the 1980s, cities that specialise in cognitive work have gained a comparative advantage in job creation. In 2014, 5.5 percent of all UK workers operated in new job types that emerged after 1990, but the figure for workers in London was almost double that at 9.8 percent. The ability of cities to attract skilled workers, as well as the diverse nature of their economies, makes them better placed than rural areas to grasp the opportunities of AI and robotics. The most vulnerable locations will be those that are heavily reliant on a single automatable industry, such as parts of the North East that have a large stock of call centre jobs.
  3. An entrenchment of demographic biases — If left untamed, automation could disadvantage some demographic groups. Recall our case study analysis of the retail sector, which suggested that AI and robotics might lead to fewer workers being required in bricks and mortar shops, but more workers being deployed in warehouse operative roles. Given that women are more likely to fill the former roles and men the latter, automation in this case could exacerbate gender pay and job differences. It is also possible that the use of AI in recruitment (e.g. algorithms that screen CVs) could amplify workplace biases and block people from employment based on their age, ethnicity or gender.

Thursday, October 12, 2017

The Data Scientist Putting Ethics In AI

By Poornima Apte
The Daily Dose
Originally published SEPT 25 2017

Here is an excerpt:

Chowdhury’s other personal goal — to make AI accessible to everyone — is noble, but if the technology’s ramifications are not yet fully known, might it not also be dangerous? Doomsday scenarios — AI as the rapacious monster devouring all our jobs — put forward in the media may not be in our immediate futures, but Alexandra Whittington does worry that implicit human biases could make their way into the AI of the future — a problem that might be exacerbated if not accounted for early on, before any democratization of the tools occurs. Whittington is a futurist and foresight director at Fast Future. She points to a recent example of AI in law where the “robot-lawyer” was named Ross, and the legal assistant had a woman’s name, Cara. “You look at Siri and Cortana, they’re women, right?” Whittington says. “But they’re assistants, not the attorney or the accountant.” It’s the whole garbage-in, garbage-out theory, she says, cautioning against an overly idealistic approach toward the technology.

The article is here.

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
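
For readers who want the formal statement, here is a minimal sketch of the information bottleneck objective in standard notation (the symbols below are ours, not quoted from the article or the talk). Given an input $X$ and a target $Y$, the method seeks a compressed internal representation $T$ by minimizing the Lagrangian

\[
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
\]

where $I(\cdot\,;\cdot)$ denotes mutual information and the parameter $\beta$ sets the trade-off between squeezing out details of the input and retaining the information that remains predictive of the target.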

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond our questions of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked. Regardless of current sentiment, driverless cars are on their way. The real questions we should be asking as we edge closer to completely autonomous roadways lie in ethically complex areas. Among these areas of concern, one very difficult question stands out. Should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.

The article is here.

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from MIT Media Lab, Toulouse School of Economics, and the University of California at Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must either put their own passenger or a pedestrian at risk. This is a real world version of an ethical dilemma called “The Trolley Problem.”

The article is here.

Tuesday, October 10, 2017

How AI & robotics are transforming social care, retail and the logistics industry

Benedict Dellot and Fabian Wallace-Stephens
RSA.org
Originally published September 18, 2017

Here is an excerpt:

The CHIRON project

CHIRON is a two-year project funded by Innovate UK. It strives to design care robotics for the future with a focus on dignity, independence and choice. CHIRON is a set of intelligent modular robotic systems, located in multiple positions around the home. Among its intended uses are helping people with personal hygiene tasks in the morning, getting them ready for the day, and supporting them in preparing meals in the kitchen. CHIRON’s various components can be mixed and matched to enable the customer to undertake a wide range of domestic and self-care tasks independently, or to enable a care worker to assist an increased number of customers.

The vision for CHIRON is to move from an ‘end of life’ institutional model, widely regarded as unsustainable and not fit for purpose, to a more dynamic and flexible market that offers people greater choice in the care sector when they require it.

The CHIRON project is being managed by a consortium led by Designability. The key technology partners are Bristol Robotics Laboratory and Shadow Robot Company, who have considerable expertise in conducting pioneering research and development in robotics. Award-winning social enterprise care provider Three Sisters Care will bring user-centred design to the core of the project. Smart Homes & Buildings Association will work to introduce the range of devices that will create CHIRON and make it a valuable presence in people’s homes.

The article is here.

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, October 9, 2017

Artificial Human Embryos Are Coming, and No One Knows How to Handle Them

Antonio Regalado
MIT Tech Review
September 19, 2017

Here is an excerpt:

Scientists at Michigan now have plans to manufacture embryoids by the hundreds. These could be used to screen drugs to see which cause birth defects, to find others that increase the chance of pregnancy, or to create starting material for lab-generated organs. But ethical and political quarrels may not be far behind. “This is a hot new frontier in both science and bioethics. And it seems likely to remain contested for the coming years,” says Jonathan Kimmelman, a member of the bioethics unit at McGill University, in Montreal, and a leader of an international organization of stem-cell scientists.

What’s really growing in the dish? There is no easy answer to that. In fact, no one is even sure what to call these new entities. In March, a team from Harvard University offered the catch-all “synthetic human entities with embryo-like features,” or SHEEFS, in a paper cautioning that “many new varieties” are on the horizon, including realistic mini-brains.

Shao, who is continuing his training at MIT, dug into the ethics question and came to his own conclusions. “Very early on in our research we started to pay attention to why are we doing this? Is it really necessary? We decided yes, we are trying to grow a structure similar to part of the human early embryo that is hard otherwise to study,” says Shao. “But we are not going to generate a complete human embryo. I can’t just consider my feelings. I have to think about society.”

The article is here.

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review process depends to a large extent on the trust and willing participation of the scholarly community and requires that everyone involved behaves responsibly and ethically. Peer reviewers play a central and critical part in the peer review process, but may come to the role without any guidance and be unaware of their ethical obligations. Journals have an obligation to provide transparent policies for peer review, and reviewers have an obligation to conduct reviews in an ethical and accountable manner. Clear communication between the journal and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard cases from its members related to peer review issues and bases these guidelines, in part, on the collective experience and wisdom of the COPE Forum participants. It is hoped they will provide helpful guidance to researchers, be a reference for editors and publishers in guiding their reviewers, and act as an educational resource for institutions in training their students and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript submissions to journals, but can also include reviews for other platforms and apply to public commenting that can occur pre- or post-publication. Reviews of other materials such as preprints, grants, books, conference proceeding submissions, registered reports (preregistered protocols), or data will have a similar underlying ethical framework, but the process will vary depending on the source material and the type of review requested. The model of peer review will also influence elements of the process.

The guidelines are here.

Trump Administration Rolls Back Birth Control Mandate

Robert Pear, Rebecca R. Ruiz, and Laurie Goodstein
The New York Times
Originally published October 6, 2017

The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people.

The twin actions, by the Department of Health and Human Services and the Justice Department, were meant to carry out a promise issued by President Trump five months ago, when he declared in the Rose Garden that “we will not allow people of faith to be targeted, bullied or silenced anymore.”

Attorney General Jeff Sessions quoted those words in issuing guidance to federal agencies and prosecutors, instructing them to take the position in court that workers, employers and organizations may claim broad exemptions from nondiscrimination laws on the basis of religious objections.

At the same time, the Department of Health and Human Services issued two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions.

More than 55 million women have access to birth control without co-payments because of the contraceptive coverage mandate, according to a study commissioned by the Obama administration. Under the new regulations, hundreds of thousands of women could lose those benefits.

The article is here.

Italics added.  And, just when the abortion rate was at pre-1973 levels.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Lawsuit Over a Suicide Points to a Risk of Antidepressants

Roni Caryn Rabin
The New York Times
Originally published September 11, 2017

Here is an excerpt:

The case is a rare instance in which a lawsuit over a suicide involving antidepressants actually went to trial; many such cases are either dismissed or settled out of court, said Brent Wisner, of the law firm Baum Hedlund Aristei Goldman, which represented Ms. Dolin.

The verdict is also unusual because Glaxo, which has asked the court to overturn the verdict or to grant a new trial, no longer sells Paxil in the United States and did not manufacture the generic form of the medication Mr. Dolin was taking. The company argues that it should not be held liable for a pill it did not make.

Concerns about safety have long dogged antidepressants, though many doctors and patients consider the medications lifesavers.

Ever since they were linked to an increase in suicidal behaviors in young people more than a decade ago, all antidepressants, including Paxil, have carried a “black box” warning label, reviewed and approved by the Food and Drug Administration, saying that they increase the risk of suicidal thinking and behavior in children, teens and young adults under age 25.

The warning labels also stipulate that the suicide risk has not been seen in short-term studies in anyone over age 24, but they urge close monitoring of all patients initiating drug treatment.

The article is here.

Thursday, October 5, 2017

Leadership Takes Self-Control. Here’s What We Know About It

Kai Chi (Sam) Yam, Huiwen Lian, D. Lance Ferris, Douglas Brown
Harvard Business Review
Originally published June 5, 2017

Here is an excerpt:

Our review identified a few consequences that are consistently linked to having lower self-control at work:
  1. Increased unethical/deviant behavior: Studies have found that when self-control resources are low, nurses are more likely to be rude to patients, tax accountants are more likely to engage in fraud, and employees in general engage in various forms of unethical behavior, such as lying to their supervisors, stealing office supplies, and so on.
  2. Decreased prosocial behavior: Depleted self-control makes employees less likely to speak up if they see problems at work, less likely to help fellow employees, and less likely to engage in corporate volunteerism.
  3. Reduced job performance: Lower self-control can lead employees to spend less time on difficult tasks, exert less effort at work, be more distracted (e.g., surfing the internet in working time), and generally perform worse than they would have, had their self-control been normal.
  4. Negative leadership styles: Perhaps what’s most concerning is that leaders with lower self-control often exhibit counter-productive leadership styles. They are more likely to verbally abuse their followers (rather than using positive means to motivate them), more likely to build weak relationships with their followers, and they are less charismatic. Scholars have estimated that the cost of such negative and abusive behavior to corporations in the United States is $23.8 billion annually.
Our review makes clear that helping employees maintain self-control is an important task if organizations want to be more effective and ethical. Fortunately, we identified three key factors that can help leaders foster self-control among employees and mitigate the negative effects of losing self-control.

The article is here.

Biased Algorithms Are Everywhere, and No One Seems to Care

Will Knight
MIT Technology Review
Originally published July 12, 2017

Here is an excerpt:

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.

The article is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Google Sets Limits on Addiction Treatment Ads, Citing Safety

Michael Corkery
The New York Times
Originally published September 14, 2017

As drug addiction soars in the United States, a booming business of rehab centers has sprung up to treat the problem. And when drug addicts and their families search for help, they often turn to Google.

But prosecutors and health advocates have warned that many online searches are leading addicts to click on ads for rehab centers that are unfit to help them or, in some cases, endangering their lives.

This week, Google acknowledged the problem — and started restricting ads that come up when someone searches for addiction treatment on its site. “We found a number of misleading experiences among rehabilitation treatment centers that led to our decision,” Google spokeswoman Elisa Greene said in a statement on Thursday.

Google has taken similar steps to restrict advertisements only a few times before. Last year it limited ads for payday lenders, and in the past it created a verification system for locksmiths to prevent fraud.

In this case, the restrictions will limit a popular marketing tool in the $35 billion addiction treatment business, affecting thousands of small-time operators.

The article is here.

Tuesday, October 3, 2017

VA About To Scrap Ethics Law That Helps Safeguards Veterans From Predatory For-Profit Colleges

Adam Linehan
Task and Purpose
Originally posted October 2, 2017

An ethics law that prohibits Department of Veterans Affairs employees from receiving money or owning a stake in for-profit colleges that rake in millions in G.I. Bill tuition has “illogical and unintended consequences,” according to VA, which is pushing to suspend the 50-year-old statute.

But veteran advocacy groups say suspending the law would make it easier for the for-profit education industry to exploit its biggest cash cow: veterans. 

In a proposal published in the Federal Register on Sept. 14, VA claims that the statute — which, according to The New York Times, was enacted following a string of scandals involving the for-profit education industry — is redundant due to the other conflict-of-interest laws that apply to all federal employees and provide sufficient safeguards.

Critics of the proposal, however, say that the statute provides additional regulations that protect against abuse and provide more transparency. 

“The statute is one of many important bipartisan reforms Congress implemented to protect G.I. Bill benefits from waste, fraud, and abuse,” William Hubbard, Student Veterans of America’s vice president of government affairs, said in an email to Task & Purpose. “A thoughtful and robust public conversation should be had to ensure that the interests of student veterans are at the top of the priority list.”

The article is here.

Editor's Note: The swamp continues to grow under the current administration.

Facts Don’t Change People’s Minds. Here’s What Does

Ozan Varol
Heleo
Originally posted September 6, 2017

Here is an excerpt:

The mind doesn’t follow the facts. Facts, as John Adams put it, are stubborn things, but our minds are even more stubborn. Doubt isn’t always resolved in the face of facts for even the most enlightened among us, however credible and convincing those facts might be.

As a result of the well-documented confirmation bias, we tend to undervalue evidence that contradicts our beliefs and overvalue evidence that confirms them. We filter out inconvenient truths and arguments on the opposing side. As a result, our opinions solidify, and it becomes increasingly harder to disrupt established patterns of thinking.

We believe in alternative facts if they support our pre-existing beliefs. Aggressively mediocre corporate executives remain in office because we interpret the evidence to confirm the accuracy of our initial hiring decision. Doctors continue to preach the ills of dietary fat despite emerging research to the contrary.

If you have any doubts about the power of the confirmation bias, think back to the last time you Googled a question. Did you meticulously read each link to get a broad objective picture? Or did you simply skim through the links looking for the page that confirms what you already believed was true? And let’s face it, you’ll always find that page, especially if you’re willing to click through to Page 12 on the Google search results.

The article is here.