Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Facebook Woes: Data Breach, Securities Fraud, or Something Else?

Matt Levine
Originally posted March 21, 2018

Here is an excerpt:

But the result is always "securities fraud," whatever the nature of the underlying input. An undisclosed data breach is securities fraud, but an undisclosed sexual-harassment problem or chicken-mispricing conspiracy will get you to the same place. There is an important practical benefit to a legal regime that works like this: It makes it easy to punish bad behavior, at least by public companies, because every sort of bad behavior is also securities fraud. You don't have to prove that the underlying chicken-mispricing conspiracy was illegal, or that the data breach was due to bad security procedures. All you have to prove is that it happened, and it wasn't disclosed, and the stock went down when it was. The evaluation of the badness is in a sense outsourced to the market: We know that the behavior was illegal, not because there was a clear law against it, but because the stock went down. Securities law is an all-purpose tool for punishing corporate badness, a one-size-fits-all approach that makes all badness commensurable using the metric of stock price. It has a certain efficiency.

On the other hand it sometimes makes me a little uneasy that so much of our law ends up working this way. "In a world of dysfunctional government and pervasive financial capitalism," I once wrote, "more and more of our politics is contested in the form of securities regulation." And: "Our government's duty to its citizens is mediated by their ownership of our public companies." When you punish bad stuff because it is bad for shareholders, you are making a certain judgment about what sort of stuff is bad and who is entitled to be protected from it.

Anyway Facebook Inc. wants to make it very clear that it did not suffer a data breach. When a researcher got data about millions of Facebook users without those users' explicit permission, and when the researcher turned that data over to Cambridge Analytica for political targeting in violation of Facebook's terms, none of that was a data breach. Facebook wasn't hacked. What happened was somewhere between a contractual violation and ... you know ... just how Facebook works? There is some splitting of hairs over this, and you can understand why -- consider that SEC guidance about when companies have to disclose data breaches -- but in another sense it just doesn't matter. You don't need to know whether the thing was a "data breach" to know how bad it was. You can just look at the stock price. The stock went down...

The article is here.

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators proceeded to label this tendency as a concerning and alarming display of overtrust of robots—an overtrust that applied even to robots that showed indications of not being trustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.

Have our tribes become more important than our country?

Jonathan Rauch
The Washington Post
Originally published February 16, 2018

Here is an excerpt:

Moreover, tribalism is a dynamic force, not a static one. It exacerbates itself by making every group feel endangered by the others, inducing all to circle their wagons still more tightly. “Today, no group in America feels comfortably dominant,” Chua writes. “The Left believes that right-wing tribalism — bigotry, racism — is tearing the country apart. The Right believes that left-wing tribalism — identity politics, political correctness — is tearing the country apart. They are both right.” I wish I could disagree.

Remedies? Chua sees hopeful signs. Psychological research shows that tribalism can be countered and overcome by teamwork: by projects that join individuals in a common task on an equal footing. One such task, it turns out, can be to reduce tribalism. In other words, with conscious effort, humans can break the tribal spiral, and many are trying. “You’d never know it from cable news or social media,” Chua writes, “but all over the country there are signs of people trying to cross divides and break out of their political tribes.”

She lists examples, and I can add my own. My involvement with the Better Angels project, a grass-roots depolarization movement that is gaining traction in communities across the country, has convinced me that millions of Americans are hungry for conciliation and willing to work for it. Last summer, at a Better Angels workshop in Virginia, I watched as eight Trump supporters and eight Hillary Clinton supporters participated in a day of structured interactions.

The article is here.

Wednesday, March 21, 2018

Stop Posturing and Start Problem Solving: A Call for Research to Prevent Gun Violence

Kelsey Hills-Evans, Julian Mitton, and Chana Sacks
AMA Journal of Ethics. January 2018, Volume 20, Number 1: 77-83.
doi: 10.1001/journalofethics.2018.20.01.pfor1-1801.


Gun violence is a major cause of preventable injury and death in the United States, leading to more than 33,000 deaths each year. However, gun violence prevention is an understudied and underfunded area of research. We review the barriers to research in the field, including restrictions on federal funding. We then outline potential areas in which further research could inform clinical practice, public health efforts, and public policy. We also review examples of innovative collaborations among interdisciplinary teams working to develop strategies to integrate gun violence prevention into patient-doctor interactions in order to interrupt the cycle of gun violence.

An Ethical Obligation to Address Gun Violence

More than twenty survivors of the Pulse nightclub massacre traveled together to Boston, Massachusetts, in the days before the one-year anniversary of that horrific night. They met with a group of physicians, nurses, social workers, administrators, and others at our hospital to talk about their experience. They recounted their memories of the sounds of gunfire, the screams of those around them, and the moans from those felled beside them. They described the ups and downs that have characterized their attempts to rebuild in the year since gunfire shattered their sense of normalcy. They shared their stories in the hopes that if more people could understand what it means to be affected by gun violence, then we, as a nation, would be compelled to act.

The article is here.

Suicidal Ideation, Plans, and Attempts Among Public Safety Personnel in Canada

R. N. Carleton and others
Canadian Psychology
First published February 8, 2018


Substantial media attention has focused on suicide among Canadian Public Safety Personnel (PSP; e.g., correctional workers, dispatchers, firefighters, paramedics, police). The attention has raised significant concerns about the mental health impact of public safety service, as well as interest in the correlates for risk of suicide. There have only been two published studies assessing lifetime suicidal behaviors among Canadian PSP. The current study was designed to assess past-year and lifetime suicidal ideation, plans, and attempts amongst a large diverse sample of Canadian PSP. Estimates of suicidal ideation, plans, and attempts were derived from self-reported data from a nationally administered online survey. Participants included 5,148 PSP (33.4% women) grouped into six categories (i.e., Call Centre Operators/Dispatchers, Correctional Workers, Firefighters, Municipal/Provincial Police, Paramedics, Royal Canadian Mounted Police). Substantial proportions of participants reported past-year and lifetime suicidal ideation (10.1%, 27.8%), planning (4.1%, 13.3%), or attempts (0.4%, 4.6%). Women reported significantly more lifetime suicidal behaviors than men (ORs = 1.15 to 2.62). Significant differences were identified across PSP categories in reports of past-year and lifetime suicidal behaviors. The proportion of Canadian PSP reporting past-year and lifetime suicidal behaviors was substantial. The estimates for lifetime suicidal behaviors appear consistent with or higher than previously published international PSP estimates, and higher than reports from the general population. Municipal/Provincial Police reported the lowest frequency for past-year and lifetime suicidal behaviors, whereas Correctional Workers and Paramedics reported the highest. The results provide initial evidence that substantial portions of diverse Canadian PSP experience suicidal behaviors, therein warranting additional resources and research.
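For readers unfamiliar with the reported effect sizes, an odds ratio (OR) compares the odds of an outcome between two groups; an OR of 2 means one group's odds are twice the other's. A minimal sketch of the calculation, using hypothetical counts rather than the study's data:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 contingency table:
    odds of the outcome in the exposed group divided by
    odds of the outcome in the unexposed group."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Illustrative counts only (not from the study): 30 of 100 women and
# 20 of 100 men reporting lifetime suicidal ideation.
or_women_vs_men = odds_ratio(30, 70, 20, 80)  # (30/70)/(20/80) ≈ 1.71
```

An OR above 1 indicates higher odds in the first group, which is the direction of the sex difference the study reports.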

The research is here.

Tuesday, March 20, 2018

Why Partisanship Is Such a Worthy Foe of Objective Truth

Charlotte Hu
Discover Magazine
Originally published February 20, 2018

Here is an excerpt:

Take, for example, an experiment that demonstrated party affiliation affected people’s perception of a protest video. When participants felt the video depicted liberally minded protesters, Republicans were more in favor of police intervention than Democrats. The opposite emerged when participants thought the video showed a conservative protest. The visual information was identical, but people drew vastly different conclusions that were shaded by their political group affiliation.

“People are more likely to behave in and experience emotions in ways that are congruent with the activated social identity,” says Van Bavel. In other words, people will go along with the group, even if the ideas oppose their own ideologies—belonging may have more value than facts.

The situation is exacerbated by social media, which creates echo chambers on both the left and the right. In these concentric social networks, the same news articles are circulated, validating the beliefs of the group and strengthening their identity association with the group.

The article is here.

The Psychology of Clinical Decision Making: Implications for Medication Use

Jerry Avorn
February 22, 2018
N Engl J Med 2018; 378:689-691

Here is an excerpt:

The key problem is medicine’s ongoing assumption that clinicians and patients are, in general, rational decision makers. In reality, we are all influenced by seemingly irrational preferences in making choices about reward, risk, time, and trade-offs that are quite different from what would be predicted by bloodless, if precise, quantitative calculations. Although we physicians sometimes resist the syllogism, if all humans are prone to irrational decision making, and all clinicians are human, then these insights must have important implications for patient care and health policy. There have been some isolated innovative applications of that understanding in medicine, but despite a growing number of publications about the psychology of decision making, most medical care — at the bedside and the systems level — is still based on a “rational actor” understanding of how we make decisions.

The choices we make about prescription drugs provide one example of how much further medicine could go in taking advantage of a more nuanced understanding of decision making under conditions of uncertainty — a description that could define the profession itself. We persist in assuming that clinicians can obtain comprehensive information about the comparative worth (clinical as well as economic) of alternative drug choices for a given condition, assimilate and evaluate all the findings, and synthesize them to make the best drug choices for our patients. Leaving aside the access problem — the necessary comparative effectiveness research often doesn’t exist — actual drug-utilization data make it clear that real-world prescribing choices are in fact based heavily on various “irrational” biases, many of which have been described by behavioral economists and other decision theorists.

The article is here.

Monday, March 19, 2018

‘The New Paradigm,’ Conscience and the Death of Catholic Morality

E. Christian Brugger
National Catholic Register
Originally published February 23, 2018

Vatican Secretary of State Cardinal Pietro Parolin, in a recent interview with Vatican News, contends the controversial reasoning expressed in the apostolic exhortation Amoris Laetitia (The Joy of Love) represents a “paradigm shift” in the Church’s reasoning, a “new approach,” arising from a “new spirit,” which the Church needs to carry out “the process of applying the directives of Amoris Laetitia.”

His reference to a “new paradigm” is murky. But its meaning is not. Among other things, he is referring to a new account of conscience that exalts the subjectivity of the process of decision-making to a degree that relativizes the objectivity of the moral law. To understand this account, we might first look at a favored maxim of Pope Francis: “Reality is greater than ideas.”

It admits no single-dimensional interpretation, which is no doubt why it’s attractive to the “Pope of Paradoxes.” But in one area, the arena of doctrine and praxis, a clear meaning has emerged. Dogma and doctrine constitute ideas, while praxis (i.e., the concrete lived experience of people) is reality: “Ideas — conceptual elaborations — are at the service of … praxis” (Evangelii Gaudium, 232).

In relation to the controversy stirred by Amoris Laetitia, “ideas” is interpreted to mean Church doctrine on thorny moral issues such as, but not only, Communion for the divorced and civilly remarried, and “reality” is interpreted to mean the concrete circumstances and decision-making of ordinary Catholics.

The article is here.

#MeToo in Medicine: Waiting for the Reckoning

Elizabeth Chuck
NBC News
Originally posted February 21, 2018

Here is an excerpt:

Health care organizations make clear that they do not condone inappropriate behavior. The American Medical Association calls workplace sexual harassment unethical and specifically states in its Code of Medical Ethics that “Sexual relationships between medical supervisors and trainees are not acceptable, even if consensual.”

Westchester Medical Center Health Network, where Jenkins says she was sexually harassed as a resident, maintains that it has never tolerated workplace harassment. In a statement to NBC News, it said that the surgeon in question "has not worked at Westchester Medical Center for years and we have no record of a report."

"Our policies on harassment are strict, clear and presented to all employees consistently," it said.

"Mechanisms have been and continue to be in place to enable confidential reporting and allegations involving staff are investigated swiftly and thoroughly. Disciplinary actions are taken, as appropriate, after internal review," the statement said, adding that Westchester Medical Center's policies were "continuously examined and enhanced" and that reporting sexual harassment was encouraged through its confidential 24-hour hotline.

More than a hotline is needed, said many females in medicine, who want to see an overhaul of their entire profession — with men made aware of what's unacceptable and women looking out for one another and supporting each other.

The article is here.

Sunday, March 18, 2018

Machine Theory of Mind

Neil C. Rabinowitz, F. Perbet, H. F. Song, C. Zhang, S.M. Ali Eslami, M. Botvinick
Artificial Intelligence
Submitted February 2018


Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.
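To make the setup concrete, here is a deliberately simplified sketch of the idea, not the authors' neural architecture: a "character" summary is built from an agent's past episodes alone, and a prediction step uses that summary to guess the agent's next action. All function names and numbers are illustrative.

```python
from collections import Counter

def character_embedding(past_episodes, n_actions):
    """Summarize an agent's past behaviour as empirical action frequencies.
    In the ToMnet this role is played by a learned 'character net' that
    embeds observed trajectories."""
    counts = Counter(a for episode in past_episodes for a in episode)
    total = sum(counts.values())
    return [counts[a] / total for a in range(n_actions)]

def predict_next_action(embedding):
    """Toy stand-in for the 'prediction net': guess the agent's most
    likely next action from its character embedding alone."""
    return max(range(len(embedding)), key=lambda a: embedding[a])

# An agent observed over two past episodes, mostly choosing action 2.
past = [[2, 2, 0, 2], [2, 1, 2, 2]]
embedding = character_embedding(past, n_actions=3)
predicted = predict_next_action(embedding)  # action 2
```

The real system learns these mappings end to end and can also infer hidden mental states such as false beliefs; the sketch only shows the two-stage observe-then-predict structure.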

The research is here.

Saturday, March 17, 2018

The Revised Declaration of Geneva

Ramin Walter Parsa-Parsi
JAMA. 2017;318(20):1971-1972.

Here is an excerpt:

The most notable difference between the Declaration of Geneva and other key ethical documents, such as the WMA’s Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects and the Declaration of Taipei on Ethical Considerations Regarding Health Databases and Biobanks, was determined to be the lack of overt recognition of patient autonomy, despite references to the physician’s obligation to exercise respect, beneficence, and medical confidentiality toward his or her patient(s). To address this difference, the workgroup, informed by other WMA members, ethical advisors, and other experts, recommended adding the following clause: “I WILL RESPECT the autonomy and dignity of my patient.” In addition, to highlight the importance of patient self-determination as one of the key cornerstones of medical ethics, the workgroup also recommended shifting all new and existing paragraphs focused on patients’ rights to the beginning of the document, followed by clauses relating to other professional obligations.

To more explicitly invoke the standards of ethical and professional conduct expected of physicians by their patients and peers, the clause “I WILL PRACTISE my profession with conscience and dignity” was augmented to include the wording “and in accordance with good medical practice.”

The article and the Declaration can be found here.

Friday, March 16, 2018

Ethics watchdog files complaint alleging Trump lawyer's payment to Stormy Daniels violates law

Javier David
Originally published March 11, 2018

A $130,000 payment made by President Donald Trump's attorney to an adult film star should be probed as a financial obligation that the president "knowingly and willfully" failed to report, a watchdog has argued.

In a legal complaint filed late last week with the Department of Justice and the Office of Government Ethics, the Citizens for Responsibility and Ethics in Washington (CREW) argued that Trump lawyer Michael Cohen's payment to Stormy Daniels "constituted a loan to President Trump that he should have reported as a liability on his public financial disclosure."

The filing raised the question of whether, as a general election candidate, Trump deliberately failed to disclose it.

CREW's argument has been raised with increasing regularity by some legal experts, who say Cohen's surreptitious payment could be viewed as an illicit campaign contribution. The attorney disclosed recently that he used a home equity loan to arrange a payment to Daniels, buying her silence for an alleged affair she had with Trump more than a decade ago.

In CREW's judgement, Trump "seemingly violated a federal law by failing to disclose it" on his campaign filings. Experts have said that had Trump paid Daniels with his own money, the payment wouldn't be an issue since candidates can contribute to their own campaigns. Yet since a disclosure wasn't made, there could be a violation.

The information is here.

How Russia Hacked the American Mind

Maya Kosoff
Vanity Fair
Originally posted February 19, 2018

Here is an excerpt:

Social media certainly facilitated the Russian campaign. As part of Facebook’s charm offensive, Zuckerberg has since offered tangible fixes, including a plan to verify election advertisements and an effort to emphasize friends, family, and Groups. But Americans’ lack of news literacy transcends Facebook, and was created in part by the Internet itself. As news has shifted from print and television outlets to digital versions of those same outlets to information shared on social-media platforms (still the primary source of news for an overwhelming majority of Americans), audiences failed to keep pace; they never learned to vet the news they consume online.

It’s also a problem we’ve created ourselves. As we’ve become increasingly polarized, news outlets have correspondingly adjusted to cater to our tastes, resulting in a media landscape that’s split into separate, non-overlapping universes of conflicting facts—a world in which Fox News and CNN spout theories about the school shooting in Parkland, Florida, that are diametrically opposed. It was this atmosphere that made the U.S. fertile ground for foreign manipulation. As political scientists Jay J. Van Bavel and Andrea Pereira noted in a recent paper, “Partisanship can even alter memory, implicit evaluation, and even perceptual judgment,” fueling a “human attraction to fake and untrustworthy news” that “poses a serious problem for healthy democratic functioning.”

The article is here.

Thursday, March 15, 2018

Apple’s Move to Share Health Care Records Is a Game-Changer

Aneesh Chopra and Safiq Rab
Originally posted February 19, 2018

Here is an excerpt:

Naysayers point out that Apple is currently displaying only a sliver of a consumer’s entire electronic health record. That is true, but it's largely on account of the limited information available via the open API standard. As with all standards efforts, the FHIR API will add more content, like scheduling slots and clinical notes, over time. Some of that work will be motivated by a proposed federal government voluntary framework to expand the types of data that must be shared over time by certified systems, as noted in this draft approach out for public comment.
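For context on what the "open API standard" means here: FHIR exposes clinical data as JSON resources over REST (for example, a GET request to {base}/Patient/{id} returns a Patient resource). A minimal sketch of how a consumer app might read such a resource; the patient values are illustrative, but the field names follow the FHIR Patient resource:

```python
# A trimmed FHIR "Patient" resource, as an app might receive it from a
# provider's FHIR endpoint. Values are illustrative.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
}

def display_name(resource):
    """Render the first recorded name of a FHIR Patient resource."""
    name = resource["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

display_name(patient)  # "Peter James Chalmers"
```

Because every certified system emits the same resource shapes, an app written against this structure works across health systems—which is what makes the consumer data-sharing model plausible at scale.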

Imagine if Apple further opens up Apple Health so it no longer serves as the destination, but a conduit for a patient's longitudinal health record to a growing marketplace of applications that can help guide consumers through decisions to better manage their health.

Thankfully, the consumer data-sharing movement—placing the longitudinal health record in the hands of the patient and the applications they trust—is taking hold, albeit quietly. In just the past few weeks, a number of health systems that were initially slow to turn on the required APIs suddenly found the motivation to meet Apple's requirement.

The article is here.

Computing and Moral Responsibility

Noorman, Merel
The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.)

Traditionally philosophical discussions on moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today’s increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility (Jonas 1984; Waelbers 2009). As we interact with and through these artifacts, they affect the decisions that we make and how we make them (Latour 1992). They persuade, facilitate and enable particular human cognitive processes, actions or attitudes, while constraining, discouraging and inhibiting others. For instance, internet search engines prioritize and present information in a particular order, thereby influencing what internet users get to see. As Verbeek points out, such technological artifacts are “active mediators” that “actively co-shape people’s being in the world: their perception and actions, experience and existence” (2006, p. 364). As active mediators, they change the character of human action and as a result challenge conventional notions of moral responsibility (Jonas 1984; Johnson 2001).

Computing presents a particular case for understanding the role of technology in moral responsibility. As these technologies become a more integral part of daily activities, automate more decision-making processes and continue to transform the way people communicate and relate to each other, they further complicate the already problematic tasks of attributing moral responsibility. The growing pervasiveness of computer technologies in everyday life, the growing complexities of these technologies and the new possibilities that they provide raise new kinds of questions: Who is responsible for the information published on the Internet? Who is responsible when a self-driving vehicle causes an accident? Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomously, can or should humans still be held responsible for the behavior of these technologies?

The entry is here.

Wednesday, March 14, 2018

Oxfam scandal is not about morality, but abuse of power

Kerry Boyd Anderson
Originally posted February 18, 2018

Here is an excerpt:

Two of these problems directly relate to the #metoo movement against sexual harassment and abuse. First, the Oxfam scandal is not about personal sexual immorality. It is about abuse of power and sexual exploitation. When these men entered a war zone or an area that had suffered a massive natural disaster, they were not dealing with women there on equal terms; they were in a position of power and relative wealth, and offered women in desperate circumstances money in exchange for sex. These women were part of the population the aid workers were supposed to be helping, so using them in this way constitutes a clear breach of trust. This is one of the #metoo movement’s key points — this type of behavior is not about personal morality, it is about abuse of power.

Another problem that the scandal highlights is the way that many organizations protect the men who are behaving badly. In the Oxfam case, the focus has been on one man in a leadership position: Roland van Hauwermeiren, who created an enabling environment and participated in the hiring of prostitutes. Van Hauwermeiren previously led a project team for the charity Merlin in Liberia, where a colleague reported that men on the team were hiring local women as prostitutes. After an internal investigation, he resigned. He later led Oxfam’s team in Chad, where similar accusations arose. Despite this, Oxfam put him in charge of a team in Haiti, where the behavior continued. Following an investigation, van Hauwermeiren resigned, but he then went on to work for Action Against Hunger in Bangladesh. 

Have some evangelicals embraced moral relativism?

Corey Fields
Baptist News Global
Originally posted February 16, 2018

Here is an excerpt:

The moral rot we’re seeing among white evangelicals has been hard to watch, and it did not start in 2016. Back in 2009, an article in the evangelical publication Christianity Today bemoaned a survey finding that 62 percent of white evangelicals support the use of torture. Despite a supposed pro-life stance, white evangelicals are also the most likely religious group to support war and the death penalty. Racism and sexual predation among elected officials are getting a pass if they deliver on policy. Charles Mathewes, a professor of religious studies at the University of Virginia, put it well: “For believers in a religion whose Scriptures teach compassion, we [white evangelicals] are a breathtakingly cruel bunch.”

Here’s a quote from a prominent evangelical author: “As it turns out, character does matter. You can’t run a family, let alone a country, without it. How foolish to believe that a person who lacks honesty and moral integrity is qualified to lead a nation and the world!” That was written by James Dobson of Focus on the Family. But he wasn’t talking about Donald Trump. He wrote that about Bill Clinton in 1998. Is this principle no longer in force, or does it only apply to Democrats?

As Robert P. Jones noted, the ends apparently justify the means. “White evangelicals have now fully embraced a consequentialist ethics that works backward from predetermined political ends, refashioning or even discarding principles as needed to achieve a desired outcome.” That’s moral relativism.

The article is here.

Tuesday, March 13, 2018

Cognitive Ability and Vulnerability to Fake News

David Z. Hambrick and Madeline Marquardt
Scientific American
Originally posted on February 6, 2018

“Fake news” is Donald Trump’s favorite catchphrase. Since the election, it has appeared in some 180 tweets by the President, decrying everything from accusations of sexual assault against him to the Russian collusion investigation to reports that he watches up to eight hours of television a day. Trump may just use “fake news” as a rhetorical device to discredit stories he doesn’t like, but there is evidence that real fake news is a serious problem. As one alarming example, an analysis by the internet media company BuzzFeed revealed that during the final three months of the 2016 U.S. presidential campaign, the 20 most popular false election stories generated around 1.3 million more Facebook engagements—shares, reactions, and comments—than did the 20 most popular legitimate stories. The most popular fake story was “Pope Francis Shocks World, Endorses Donald Trump for President.”

Fake news can distort people’s beliefs even after being debunked. For example, repeated over and over, a story such as the one about the Pope endorsing Trump can create a glow around a political candidate that persists long after the story is exposed as fake. A study recently published in the journal Intelligence suggests that some people may have an especially difficult time rejecting misinformation.

The article is here.

Doctors In Maine Say Halt In OxyContin Marketing Comes '20 Years Late'

Patty Wight
Originally posted February 13, 2018

The maker of OxyContin, one of the most prescribed and aggressively marketed opioid painkillers, will no longer tout the drug or any other opioids to doctors.

The announcement, made Saturday, came as drugmaker Purdue Pharma faces lawsuits for deceptive marketing brought by cities and counties across the U.S., including several in Maine. The company said it's cutting its U.S. sales force by more than half.

Just how important are these steps against the backdrop of a raging opioid epidemic that took the lives of more than 300 Maine residents in 2016, and accounted for more than 42,000 deaths nationwide?

"They're 20 years late to the game," says Dr. Noah Nesin, a family physician and vice president of medical affairs at Penobscot Community Health Care.

Nesin says even after Purdue Pharma paid $600 million in fines about a decade ago for misleading doctors and regulators about the risks opioids posed for addiction and abuse, it continued marketing them.

The article is here.

Monday, March 12, 2018

Train PhD students to be thinkers not just specialists

Gundula Bosch
Originally posted February 14, 2018

Under pressure to turn out productive lab members quickly, many PhD programmes in the biomedical sciences have shortened their courses, squeezing out opportunities for putting research into its wider context. Consequently, most PhD curricula are unlikely to nurture the big thinkers and creative problem-solvers that society needs.

That means students are taught every detail of a microbe’s life cycle but little about the life scientific. They need to be taught to recognize how errors can occur. Trainees should evaluate case studies derived from flawed real research, or use interdisciplinary detective games to find logical fallacies in the literature. Above all, students must be shown the scientific process as it is — with its limitations and potential pitfalls as well as its fun side, such as serendipitous discoveries and hilarious blunders.

This is exactly the gap that I am trying to fill at Johns Hopkins University in Baltimore, Maryland, where a new graduate science programme is entering its second year. Microbiologist Arturo Casadevall and I began pushing for reform in early 2015, citing the need to put the philosophy back into the doctorate of philosophy: that is, the ‘Ph’ back into the PhD.

The article is here.

The tech bias: why Silicon Valley needs social theory

Jess Bier
Originally posted February 14, 2018

Here is an excerpt:

That Google memo is an extreme example of an imbalance in how different ways of knowing are valued. Silicon Valley tech companies draw on innovative technical theory but have yet to really incorporate advances in social theory. The inattention to such knowledge becomes all too apparent when algorithms fail in their real-life applications – from automated soap-dispensers that fail to turn on when a user has dark brown skin, to the new iPhone X’s inability to distinguish among different Asian women.

Social theorists in fields such as sociology, geography, and science and technology studies have shown how race, gender and class biases inform technical design. So there’s irony in the fact that employees hold sexist and racist attitudes, yet ‘we are supposed to believe that these same employees are developing “neutral” or “objective” decision-making tools’, as the communications scholar Safiya Umoja Noble at the University of Southern California argues in her book Algorithms of Oppression (2018).

In many cases, what’s eroding the value of social knowledge is unintentional bias – on display when prominent advocates for equality in science and tech undervalue research in the social sciences. The physicist Neil deGrasse Tyson, for example, has downplayed the link between sexism and under-representation in science. Apparently, he’s happy to ignore extensive research pointing out that the natural sciences’ male-dominated institutional cultures are a major cause of the attrition of female scientists at all stages of their careers.

The article is here.

Sunday, March 11, 2018

Cognitive Bias in Forensic Mental Health Assessment: Evaluator Beliefs About Its Nature and Scope

Zapf, P. A., Kukucka, J., Kassin, S. M., & Dror, I. E.
Psychology, Public Policy, and Law


Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.

Closing statements:

What is clear is that forensic evaluators appear to be aware of the issue of bias in general, but diminishing rates of perceived susceptibility to bias in one’s own judgments and the perception of higher rates of bias in the judgments of others as compared with oneself, underscore that we may not be the most objective evaluators of our own decisions. As with the forensic sciences, implementing procedures and strategies to minimize the impact of bias in forensic evaluation can serve to proactively mitigate against the intrusion of irrelevant information in forensic decision making. This is especially important given the courts’ heavy reliance on evaluators’ opinions (see Zapf, Hubbard, Cooper, Wheeles, & Ronan, 2004), the fact that judges and juries have little choice but to trust the expert’s self-assessment of bias (see Kassin et al., 2013), and the potential for biased opinions and conclusions to cross-contaminate other evidence or testimony (see Dror, Morgan, Rando, & Nakhaeizadeh, 2017). More research is necessary to determine the specific strategies to be used and the various recommended means of implementing those strategies across forensic evaluations, but the time appears to be ripe for further discussion and development of policies and guidelines to acknowledge and attempt to reduce the potential impact of bias in forensic evaluation.

The article is here.

Saturday, March 10, 2018

What swamp? Lobbyists get ethics waivers to work for Trump

Associated Press
Originally posted March 9, 2018

President Donald Trump and his appointees have stocked federal agencies with ex-lobbyists and corporate lawyers who now help regulate the very industries from which they previously collected paychecks, despite promising as a candidate to drain the swamp in Washington.

A week after his January 2017 inauguration, Trump signed an executive order that bars former lobbyists, lawyers and others from participating in any matter they lobbied or otherwise worked on for private clients within two years before going to work for the government.

But records reviewed by The Associated Press show Trump's top lawyer, White House counsel Don McGahn, has issued at least 24 ethics waivers to key administration officials at the White House and executive branch agencies.

Though the waivers were typically signed by McGahn months ago, the Office of Government Ethics disclosed several more on Wednesday.

One allows FBI Director Chris Wray "to participate in matters involving a confidential former client." The three-sentence waiver gives no indication about what Wray's conflict of interest might be or how it may violate Trump's ethics order.

Asked about the waivers, Lindsay Walters, a White House spokeswoman, said, "In the interests of full transparency and good governance, the posted waivers set forth the policy reasons for granting an exception to the pledge."

The article is here.

Universities Rush to Roll Out Computer Science Ethics Courses

Natasha Singer
The New York Times
Originally posted February 12, 2018

Here is an excerpt:

“Technology is not neutral,” said Professor Sahami, who formerly worked at Google as a senior research scientist. “The choices that get made in building technology then have social ramifications.”

The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.

“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”

Computer science programs are required to make sure students have an understanding of ethical issues related to computing in order to be accredited by ABET, a global accreditation group for university science and engineering programs. Some computer science departments have folded the topic into a broader class, and others have stand-alone courses.

But until recently, ethics did not seem relevant to many students.

The article is here.

Friday, March 9, 2018

Dealing with Racist Patients

Kimani Paul-Emile, Alexander K. Smith, Bernard Lo, and Alicia Fernández
N Engl J Med 2016; 374:708-711

Here is an excerpt:

Beyond these general legal rules, when patients reject physicians on the basis of their race or ethnic background, there is little guidance for hospitals and physicians regarding ways of effectively balancing patients’ interests, medical personnel’s employment rights, and the duty to treat. We believe that sound decision making in this context will turn on five ethical and practical factors: the patient’s medical condition, his or her decision-making capacity, options for responding to the request, reasons for the request, and effect on the physician (see flow chart). It’s helpful for physicians to consider these factors as they engage in negotiation, persuasion, and (in some cases) accommodation within the practical realities of providing effective care for all patients.

The patient’s medical condition and the clinical setting should drive decision making. In an emergency situation with a patient whose condition is unstable, the physician should first treat and stabilize the patient. Reassignment requests based on bigotry may be attributable to delirium, dementia, or psychosis, and patients’ preferences may change if reversible disorders are identified and treated. Patients with significantly impaired cognition are generally not held to be ethically responsible.

The article is here.

The brain as artificial intelligence: prospecting the frontiers of neuroscience

Fuller, S.
AI & Soc (2018).


This article explores the proposition that the brain, normally seen as an organ of the human body, should be understood as a biologically based form of artificial intelligence, in the course of which the case is made for a new kind of ‘brain exceptionalism’. After noting that such a view was generally assumed by the founders of AI in the 1950s, the argument proceeds by drawing on the distinction between science—in this case neuroscience—adopting a ‘telescopic’ or a ‘microscopic’ orientation to reality, depending on how it regards its characteristic investigative technologies. The paper concludes by recommending a ‘microscopic’ yet non-reductionist research agenda for neuroscience, in which the brain is seen as an underutilised organ whose energy efficiency is likely to outstrip that of the most powerful supercomputers for the foreseeable future.

The article is here.

Thursday, March 8, 2018

More Religious Leaders Challenge Silence, Isolation Surrounding Suicide

Cheryl Platzman Weinstock
Originally posted February 11, 2018

Here is an excerpt:

Until recently, many religious leaders were not well-prepared to talk about suicide with their congregants. Now some clergy have become an important part of suicide prevention.

"Where there's faith, there's hope, and where there's hope, there's life," says David Litts, co-leader of the Faith Communities Task Force of the National Action Alliance for Suicide Prevention.

Arnold also leads that task force. "If someone dies from heart disease, for instance, or in an accident, they may wonder where God is, but when someone dies by suicide, a whole lot of other questions get raised," she says. "When you can't talk about this in church, then it feels like God can't talk about it either."

But in her church, she says, there isn't shame surrounding suicide. During the pastoral prayer, for instance, she says she lifts up congregants dealing with cancer, heart disease or mental health issues. "It's a way of signaling to people this is a safe place to talk about such things and be honest about them."

The article is here.

Polluted Morality: Air Pollution Predicts Criminal Activity and Unethical Behavior

Jackson G. Lu, Julia J. Lee, Francesca Gino, Adam D. Galinsky
Psychological Science 
First Published February 7, 2018


Air pollution is a serious problem that affects billions of people globally. Although the environmental and health costs of air pollution are well known, the present research investigates its ethical costs. We propose that air pollution can increase criminal and unethical behavior by increasing anxiety. Analyses of a 9-year panel of 9,360 U.S. cities found that air pollution predicted six major categories of crime; these analyses accounted for a comprehensive set of control variables (e.g., city and year fixed effects, population, law enforcement) and survived various robustness checks (e.g., balanced panel, nonparametric bootstrapped standard errors). Three subsequent experiments involving American and Indian participants established the causal effect of psychologically experiencing a polluted (vs. clean) environment on unethical behavior. Consistent with our theoretical perspective, results revealed that anxiety mediated this effect. Air pollution not only corrupts people’s health, but also can contaminate their morality.
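The "city and year fixed effects" the abstract mentions are a standard panel-regression device: dummy variables absorb everything constant within a city and everything common to a year, so the pollution coefficient is identified from within-city, within-year variation. A minimal sketch on synthetic data (not the authors' data or code; all numbers here are made up) illustrates the specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_years = 50, 9
city = np.repeat(np.arange(n_cities), n_years)  # panel index: each city observed each year
year = np.tile(np.arange(n_years), n_cities)

# Synthetic panel: crime depends on pollution plus unobserved city and year effects.
city_fe = rng.normal(0, 1, n_cities)
year_fe = rng.normal(0, 0.5, n_years)
pollution = rng.normal(0, 1, n_cities * n_years) + 0.5 * city_fe[city]  # correlated with city effects
beta_true = 0.3
crime = beta_true * pollution + city_fe[city] + year_fe[year] + rng.normal(0, 0.1, n_cities * n_years)

# Design matrix: pollution, city dummies, year dummies (one of each dropped), intercept.
X = np.column_stack([
    pollution,
    (city[:, None] == np.arange(1, n_cities)).astype(float),
    (year[:, None] == np.arange(1, n_years)).astype(float),
    np.ones(len(crime)),
])
beta_hat, *_ = np.linalg.lstsq(X, crime, rcond=None)
print(round(beta_hat[0], 2))  # recovers a value close to 0.3
```

Because pollution is deliberately correlated with the city effects here, a naive regression without the dummies would be biased; the fixed-effects specification recovers the true coefficient.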

The research is here.

If you cannot get to the article, you can download it from here.

Wednesday, March 7, 2018

The Squishy Ethics of Sex With Robots

Adam Rogers
Originally published February 2, 2018

Here is an excerpt:

Most of the world is ready to accept algorithm-enabled, internet-connected, virtual-reality-optimized sex machines with open arms (arms! I said arms!). The technology is evolving fast, which means two inbound waves of problems. Privacy and security, sure, but even solving those won’t answer two very hard questions: Can a robot consent to having sex with you? Can you consent to sex with it?

One thing that is unquestionable: There is a market. Either through licensing the teledildonics patent or risking lawsuits, several companies have tried to build sex technology that takes advantage of Bluetooth and the internet. “Remote connectivity allows people on opposite ends of the world to control each other’s dildo or sleeve device,” says Maxine Lynn, a patent attorney who writes the blog Unzipped: Sex, Tech, and the Law. “Then there’s also bidirectional control, which is going to be huge in the future. That’s when one sex toy controls the other sex toy and vice versa.”

Vibease, for example, makes a wearable that pulsates in time to synchronized digital books or a partner controlling an app. We-Vibe makes vibrators that a partner can control or set to preset patterns. And so on.

The article is here.

The Strange Order of Things: why feelings are the unstoppable force

John Banville
The Guardian
Originally posted February 2, 2018

Here is an excerpt:

“Feelings have not been given the credit they deserve as motives, monitors, and negotiators of human cultural endeavours.” In claiming simplicity, it is possible the author is being a mite disingenuous. The tone in which he sets out his argument is so carefully judged, so stylistically calm and scientifically collected, that most readers will be lulled into nodding agreement. Yet a moment’s thought will tell us that we conduct our lives largely in contradiction of his premise, and for the most part deal with each other, and even with ourselves, as if we were pure spirit accidentally and inconveniently shackled to half a hundredweight or so of forked flesh.

“Feelings, and more generally affect of any sort and strength,” Damasio writes, “are the unrecognised presences at the cultural conference table.” According to him, the conference began among the bacteria, which – who? – even in their “unminded existence … assume what can only be called a sort of ‘moral attitude’”. In support of his claim, he adduces the various ways in which bacteria behave that bear a striking resemblance to human social organisation. The implication is, then, that “the human unconscious literally goes back to early life-forms, deeper and further than Freud or Jung ever dreamed of”. Damasio’s argument is that we are directly descended not only from the apes, but from the earliest wrigglers at the bottom of the primordial rock pool.

The keyword throughout the book is homeostasis, of which he offers a number of definitions, the clearest of which is the earliest, and which he favours enough to set it in italics: homeostasis is the force – the word seems justified – that ensures that “life is regulated within a range that is not just compatible with survival but also conducive to flourishing, to a projection of life into the future of an organism or a species”.

The article is here.

Tuesday, March 6, 2018

Toward a Psychology of Moral Expansiveness

Daniel Crimston, Matthew J. Hornsey, Paul G. Bain, Brock Bastian
Current Directions in Psychological Science 
Vol 27, Issue 1, pp. 14 - 19


Theorists have long noted that people’s moral circles have expanded over the course of history, with modern people extending moral concern to entities—both human and nonhuman—that our ancestors would never have considered including within their moral boundaries. In recent decades, researchers have sought a comprehensive understanding of the psychology of moral expansiveness. We first review the history of conceptual and methodological approaches in understanding our moral boundaries, with a particular focus on the recently developed Moral Expansiveness Scale. We then explore individual differences in moral expansiveness, attributes of entities that predict their inclusion in moral circles, and cognitive and motivational factors that help explain what we include within our moral boundaries and why they may shrink or expand. Throughout, we highlight the consequences of these psychological effects for real-world ethical decision making.

The article is here.

Don't Blame PPC, Blame Poor Ethics

Kyle Infante
Originally posted on February 2, 2018

Here is an excerpt:

To sum up the entire debacle in a nutshell: Marketing entities would create referral ads and websites to bid on highly sought after addiction keywords, drive traffic to their call centers and send people to facilities-based purely on profit. There was no clinical or medical prescreening being conducted, no thought put into placing that individual with the appropriate level of care. Suffering addicts and alcoholics were being misled by strategic digital marketing tactics and pushed to the highest bidder. Often, these high bidders had a slew of ethical issues. This drove the cost per click for each ad through the roof, and soon enough only the Goliaths could compete on PPC (pay per click). Unless you had the money to hire an advertising agency or had an in-house marketer with extensive digital experience, there was no way to survive.

Recently, Google stepped in and placed restrictions on these ads to curb the gross abuse of the market. In September 2017, Google began to limit the kinds of ads facilities could create and just this year placed a temporary ban on all recovery ads to audit the entire industry.

The article is here.

Monday, March 5, 2018

Would you be willing to zap your child's brain? Public perspectives on parental responsibilities and the ethics of enhancing children with transcranial direct current stimulation

Katy Wagner, Hannah Maslen, Justin Oakley, and Julian Savulescu
AJOB Empirical Bioethics, 2018 (published online ahead of print)


Transcranial direct current stimulation (tDCS) is an experimental brain stimulation technology that may one day be used to enhance the cognitive capacities of children. Discussion about the ethical issues that this would raise has rarely moved beyond expert circles. However, the opinions of the wider public can lead to more democratic policy decisions and broaden academic discussion of this issue.

We performed a quantitative survey of members of the US public. A between-subjects design was employed, where conditions varied based on the trait respondents considered for enhancement.

227 responses were included for analysis. Our key finding was that the majority were unwilling to enhance their child with tDCS. Respondents were most reluctant to enhance traits considered fundamental to the self (such as motivation and empathy). However, many respondents may give in to implicit coercion to enhance their child in spite of an initial reluctance. A ban on tDCS was not supported if it were to be used safely for the enhancement of mood or mathematical ability. Opposition to such a ban may be related to the belief that tDCS use would not represent cheating or violate authenticity (as it relates to achievements rather than identity).

The wider public appears to think that crossing the line from treatment to enhancement with tDCS would not be in a child's best interests. However, an important alternative interpretation of our results is that lay people may be willing to use enhancers that matched their preference for 'natural' enhancers. A ban on the safe use of tDCS for enhancing non-fundamental traits would be unlikely to garner public support. Nonetheless, it could become important to regulate tDCS in order to prevent misuse on children, because individuals reluctant to enhance may be likely to give in to implicit coercion to enhance their child.

The research is here.

Donald Trump and the rise of tribal epistemology

David Roberts
Originally posted May 19, 2017 and still extremely important

Here is an excerpt:

Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

Now tribal epistemology has found its way to the White House.

Donald Trump and his team represent an assault on almost every American institution — they make no secret of their desire to “deconstruct the administrative state” — but their hostility toward the media is unique in its intensity.

It is Trump’s obsession and favorite target. He sees himself as waging a “running war” on the mainstream press, which his consigliere Steve Bannon calls “the opposition party.”

The article is here.

Sunday, March 4, 2018

Increasing honesty in humans with noninvasive brain stimulation

Michel André Maréchal, Alain Cohn, Giuseppe Ugazio and Christian C. Ruff
Proceedings of the National Academy of Sciences (PNAS)
2017, 114(17), 4360-4364


Honesty plays a key role in social and economic interactions and is crucial for societal functioning. However, breaches of honesty are pervasive and cause significant societal and economic problems that can affect entire nations. Despite its importance, remarkably little is known about the neurobiological mechanisms supporting honest behavior. We demonstrate that honesty can be increased in humans with transcranial direct current stimulation (tDCS) over the right dorsolateral prefrontal cortex. Participants (n = 145) completed a die-rolling task where they could misreport their outcomes to increase their earnings, thereby pitting honest behavior against personal financial gain. Cheating was substantial in a control condition but decreased dramatically when neural excitability was enhanced with tDCS. This increase in honesty could not be explained by changes in material self-interest or moral beliefs and was dissociated from participants’ impulsivity, willingness to take risks, and mood. A follow-up experiment (n = 156) showed that tDCS only reduced cheating when dishonest behavior benefited the participants themselves rather than another person, suggesting that the stimulated neural process specifically resolves conflicts between honesty and material self-interest. Our results demonstrate that honesty can be strengthened by noninvasive interventions and concur with theories proposing that the human brain has evolved mechanisms dedicated to control complex social behaviors.
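The die-rolling task measures cheating only at the group level: no individual report can be proven false, but the excess of payoff-relevant outcomes over chance reveals the aggregate cheating rate. A rough simulation of that inference logic (a sketch with invented parameters, not the study's design or numbers) looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated reports per condition

def simulate_reports(cheat_prob):
    """Each participant privately rolls; a 'success' (paying outcome) has probability 0.5.
    A participant who rolls a failure misreports it as a success with probability cheat_prob."""
    rolled_success = rng.random(n) < 0.5
    misreport = (~rolled_success) & (rng.random(n) < cheat_prob)
    return rolled_success | misreport

def estimate_cheating(reports):
    """Infer the fraction of failures misreported from excess successes over the 50% chance rate."""
    p_obs = reports.mean()
    return (p_obs - 0.5) / 0.5

control = estimate_cheating(simulate_reports(cheat_prob=0.35))
stimulated = estimate_cheating(simulate_reports(cheat_prob=0.10))
print(round(control, 2), round(stimulated, 2))
```

The estimator works because reported successes in excess of the chance rate can only come from misreported failures; comparing the two conditions gives the group-level effect of the intervention without ever identifying an individual cheater.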

The article is here.

Saturday, March 3, 2018

Why It's OK Behavioral Economics Failed To Prevent Heart Attacks

Peter Ubel
Originally published January 31, 2018

Here are two excerpts:

To increase the chance people will take these important pills, a team out of the University of Pennsylvania created a behavioral economic incentive. The intervention was multipronged. It included enrolling patients in lotteries, which gave them a chance to win money every day they took their pills. It encouraged patients to enlist a friend to help them stay on track taking their pills, a friend who would get notified every time they skipped their medications for a few days in a row.

But the intervention failed — it neither increased adherence to medications nor reduced hospitalizations for heart attacks. These results are shown in the figure below, which, despite appearances, shows two lines, representing the intervention group and the control group, respectively; the lines practically merge into one...


Sometimes behavioral economics is criticized for being over-hyped, for being touted as the answer to all our behavioral problems. I’ve been one of those critics. But my beef isn’t with behavioral economists — my research frequently draws upon insights from that field. My issue is with people who think of behavioral economics as some kind of magic wand we can wave over stubbornly harmful behavior. Changing people’s behavior is hard to do, especially without resorting to draconian measures.

We need to keep experimenting with ways to help people take care of their health.

The article is here.

Friday, March 2, 2018

Burnout in mental health providers

Practice Research and Policy Staff
American Psychological Association Practice Organization
Originally published January 25, 2018

Burnout commonly affects individuals involved in the direct care of others, including mental health practitioners. Burnout consists of three components: emotional exhaustion, depersonalization of clients and feelings of ineffectiveness or lack of personal accomplishment (Maslach, Jackson & Leiter, 1997). Emotional exhaustion may include feeling overextended, being unable to feel compassion for clients and feeling unable to meet workplace demands. Depersonalization is the process by which providers distance themselves from clients to prevent emotional fatigue. Finally, feelings of ineffectiveness and lack of personal accomplishment occur when practitioners feel a negative sense of personal and/or career worth.

Studies estimate that anywhere between 21 percent and 61 percent of mental health practitioners experience signs of burnout (Morse et al., 2012). Burnout has been associated with workplace climate, caseload size and severity of client symptoms (Acker, 2011; Craig & Sprang, 2010; Thompson et al., 2014). In contrast, studies examining burnout prevention have found that smaller caseloads, less paperwork and more flexibility at work are associated with lower rates of burnout (Lent & Schwartz, 2012). Burnout results in negative outcomes for both practitioners and their clients. Symptoms of burnout are not solely psychological; burnout has also been linked to physical ailments such as headaches and gastrointestinal problems (Kim et al., 2011).

The following studies examine correlates and predictors of burnout in mental health care providers. The first study investigates burnout amongst practitioners working on posttraumatic stress disorder clinical teams in Veterans Affairs (VA) health care settings. The second study examines correlates of burnout in sexual minority practitioners, and the third study investigates the impact of personality on burnout. Finally, the fourth study examines factors that may prevent burnout.

The information is here.

Thursday, March 1, 2018

Concern for Others Leads to Vicarious Optimism

Andreas Kappes, Nadira S. Faber, Guy Kahane, Julian Savulescu, Molly J. Crockett
Psychological Science 
First Published January 30, 2018


An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.
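The "optimistic learning bias" described here is typically modeled with asymmetric learning rates: people take a larger update step toward better-than-expected news than toward worse-than-expected news. A toy sketch of that update rule (illustrative parameters and trial values of my own invention, not the paper's model fits):

```python
def update_belief(estimate, base_rate, lr_good=0.6, lr_bad=0.2):
    """One trial of updating a risk estimate (in percent) for a negative event.
    Good news: the base rate shown is lower than the participant's estimate."""
    error = base_rate - estimate
    lr = lr_good if error < 0 else lr_bad  # bigger step toward good news
    return estimate + lr * error

# (initial estimate, base rate shown) pairs, in percent
trials = [(40, 20), (10, 30), (50, 25), (15, 35)]
biased = [round(update_belief(e, b), 1) for e, b in trials]
unbiased = [round(update_belief(e, b, lr_good=0.4, lr_bad=0.4), 1) for e, b in trials]
print(biased)    # [28.0, 14.0, 35.0, 19.0] — good-news trials move much further
print(unbiased)  # [32.0, 18.0, 40.0, 23.0] — symmetric updating for comparison
```

On this model, the paper's finding amounts to the asymmetry (a larger good-news than bad-news learning rate) appearing not only for events affecting oneself but also for events affecting friends and identifiable strangers one cares about.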

From the Discussion section

Optimism is a self-centered phenomenon in which people underestimate the likelihood of negative future events for themselves compared with others (Weinstein, 1980). Usually, the “other” is defined as a group of average others—an anonymous mass. When past studies asked participants to estimate the likelihood of an event happening to either themselves or the average population, participants did not show a learning bias for the average population (Garrett & Sharot, 2014). These findings are unsurprising given that people typically feel little concern for anonymous groups or anonymous individual strangers (Kogut & Ritov, 2005; Loewenstein et al., 2005). Yet people do care about identifiable others, and we accordingly found that people exhibit an optimistic learning bias for identifiable strangers and, even more markedly, for friends. Our research thereby suggests that optimism in learning is not restricted to oneself. We see not only our own lives through rose-tinted glasses but also the lives of those we care about.

The research is here.

Monkeys? Humans? The ethics of testing diesel fumes

Joel Gunter
BBC News
Originally published January 30, 2018

"These tests on monkeys or even humans cannot be justified ethically in any way," said Steffen Seibert, a spokesman for German Chancellor Angela Merkel.

Environment Minister Barbara Hendricks called the experiments "abominable", opposition politician Stephan Weil said they were "absurd and abhorrent".

But in a world where animal testing and paid medical testing on humans are commonplace, why have these particular tests provoked such outrage?

The exact nature of the VW tests is not known, as their methodology and findings have not been made public, but two independent scientists who have conducted air pollution tests on human volunteers told the BBC that similar tests on humans are commonplace.

"There have been hundreds of such studies, in most countries in the world, over the last 30 years," said Frank Kelly, professor of environmental health at King's College London. "They are funded by national governments, following strict ethical review, to understand the impact of emissions on human health."

The controversial, and possibly unethical, aspect of the VW testing was that it had been funded by a lobby group rather than an independent, government-funded body, he said.

The article is here.