Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, March 31, 2019

Is Ethical A.I. Even Possible?

Cade Metz
The New York Times
Originally posted March 1, 2019

Here is an excerpt:

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

The info is here.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games, simple robotics tasks, and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary because the model of what is better or worse will only be accurate if we have applicable data to generalize from.
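The comparison-based training described above can be sketched as a simple Bradley–Terry-style preference model: humans label which of two outcomes is better, and we fit a scorer to match those judgements. Everything concrete here (linear features, a synthetic "hidden values" vector, the learning rate) is an illustrative assumption, not the authors' actual setup.

```python
import numpy as np

# Toy setup: each outcome is a feature vector; a human labels which of two
# outcomes is "better". We fit a linear score so the preferred outcome gets
# the higher score (a Bradley-Terry / logistic preference model).
rng = np.random.default_rng(0)

true_w = np.array([2.0, -1.0, 0.5])        # hidden "human values" (toy)
X_a = rng.normal(size=(500, 3))            # candidate outcome A
X_b = rng.normal(size=(500, 3))            # candidate outcome B
prefers_a = (X_a @ true_w > X_b @ true_w)  # human says A is better

w = np.zeros(3)
for _ in range(200):                       # gradient ascent on log-likelihood
    diff = X_a - X_b
    p_a = 1.0 / (1.0 + np.exp(-(diff @ w)))     # P(human prefers A)
    grad = diff.T @ (prefers_a - p_a) / len(diff)
    w += 0.5 * grad

# The learned scorer should now replicate human judgements on these pairs.
agreement = np.mean((X_a @ w > X_b @ w) == prefers_a)
```

The point of the sketch is the training signal, not the model class: the system never sees a rule for "better", only many labeled comparisons, and generalization comes from the learned scorer.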

The info is here.

Friday, March 29, 2019

Artificial Morality

Robert Koehler
Originally posted March 14, 2019

Artificial Intelligence is one thing. Artificial morality is another. It may sound something like this:

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft.”

The words are those of Microsoft president Brad Smith, writing on a corporate blogsite last fall in defense of the company’s new contract with the U.S. Army, worth $479 million, to make augmented reality headsets for use in combat. The headsets, known as the Integrated Visual Augmentation System, or IVAS, are a way to “increase lethality” when the military engages the enemy, according to a Defense Department official. Microsoft’s involvement in this program set off a wave of outrage among the company’s employees, with more than a hundred of them signing a letter to the company’s top executives demanding that the contract be canceled.

“We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.”

The info is here.

The history and future of digital health in the field of behavioral medicine

Danielle Arigo, Danielle E. Jake-Schoffman, Kathleen Wolin, Ellen Beckjord, & Eric B. Hekler
J Behav Med (2019) 42: 67.


Since its earliest days, the field of behavioral medicine has leveraged technology to increase the reach and effectiveness of its interventions. Here, we highlight key areas of opportunity and recommend next steps to further advance intervention development, evaluation, and commercialization with a focus on three technologies: mobile applications (apps), social media, and wearable devices. Ultimately, we argue that the future of digital health behavioral science research lies in finding ways to advance more robust academic-industry partnerships. These include academics consciously working to prepare and train the twenty-first-century workforce for digital health, advancing methods that can balance industry's need for efficiency with academia's desire for rigor and reproducibility, and establishing common practices and procedures that support more ethical approaches to promoting healthy behavior.

Here is a portion of the summary:

An unknown landscape of privacy and data security

Another relatively new set of challenges centers around the issues of privacy and data security presented by digital health tools. First, some commercially available technologies that were originally produced for purposes other than promoting healthy behavior (e.g., social media) are now being used to study health behavior and deliver interventions. This poses a variety of potential privacy issues depending on the privacy settings used, including the fact that data from non-participants may inadvertently be viewed and collected, and their rights should also be considered as part of study procedures (Arigo et al., 2018).  Privacy may be of particular concern as apps begin to incorporate additional smartphone technologies such as GPS location tracking and cameras (Nebeker et al., 2015).  Second, for commercial products that were originally designed for health behavior change (e.g., apps), researchers need to carefully read and understand the associated privacy and security agreements, be sure that participants understand these agreements, and include a summary of this information in their applications to ethics review boards.

Thursday, March 28, 2019

An Empirical Evaluation of the Failure of the Strickland Standard to Ensure Adequate Counsel to Defendants with Mental Disabilities Facing the Death Penalty

Michael L. Perlin, Talia Roitberg Harmon, & Sarah Chatt
Social Science Research Network 


Anyone who has been involved with death penalty litigation in the past four decades knows that one of the most scandalous aspects of that process—in many ways, the most scandalous—is the inadequacy of counsel so often provided to defendants facing execution. By now, virtually anyone with even a passing interest is well versed in the cases and stories about sleeping lawyers, missed deadlines, alcoholic and disoriented lawyers, and, more globally, lawyers who simply failed to vigorously defend their clients. This is not news.

And, in the same vein, anyone who has been so involved with this area of law and policy for the past 35 years knows that it is impossible to make sense of any of these developments without a deep understanding of the Supreme Court’s decision in Strickland v. Washington, 466 U.S. 668 (1984), the case that established a pallid, virtually-impossible-to fail test for adequacy of counsel in such litigation. Again, this is not news.

We also know that some of the most troubling results in Strickland interpretations have come in cases in which the defendant was mentally disabled—either by serious mental illness or by intellectual disability. Some of the decisions in these cases—rejecting Strickland-based appeals—have been shocking, making a mockery out of a constitutionally based standard.

To the best of our knowledge, no one has—prior to this article—undertaken an extensive empirical analysis of how one discrete US federal circuit court of appeals has dealt with a wide array of Strickland claims in cases involving defendants with mental disabilities. We do this here. In this article, we reexamine these issues from the perspective of the 198 state cases decided in the Fifth Circuit from 1984 to 2017 involving death penalty verdicts in which, at some stage of the appellate process, a Strickland claim was made (there were only 13 cases in which any relief was even preliminarily granted under Strickland). As we demonstrate subsequently, Strickland is indeed a pallid standard, fostering “tolerance of abysmal lawyering,” and is one that makes a mockery of the most vital of constitutional law protections: the right to adequate counsel.

This article will proceed in this way. First, we discuss the background of the development of counsel adequacy in death penalty cases. Next, we look carefully at Strickland, and the subsequent Supreme Court cases that appear—on the surface—to bolster it in this context. We then consider multiple jurisprudential filters that we believe must be taken seriously if this area of the law is to be given any authentic meaning. Next, we will examine and interpret the data that we have developed, looking carefully at what happened after the Strickland-ordered remand in the 13 Strickland “victories.” Finally, we will look at this entire area of law through the filter of therapeutic jurisprudence, and then explain why and how the charade of adequacy of counsel law fails miserably to meet the standards of this important school of thought.

Behind the Scenes, Health Insurers Use Cash and Gifts to Sway Which Benefits Employers Choose

Marshall Allen
Originally posted February 20, 2019

Here is an excerpt:

These industry payments can’t help but influence which plans brokers highlight for employers, said Eric Campbell, director of research at the University of Colorado Center for Bioethics and Humanities.

“It’s a classic conflict of interest,” Campbell said.

There’s “a large body of virtually irrefutable evidence,” Campbell said, that shows drug company payments to doctors influence the way they prescribe. “Denying this effect is like denying that gravity exists.” And there’s no reason, he said, to think brokers are any different.

Critics say the setup is akin to a single real estate agent representing both the buyer and seller in a home sale. A buyer would not expect the seller’s agent to negotiate the lowest price or highlight all the clauses and fine print that add unnecessary costs.

“If you want to draw a straight conclusion: It has been in the best interest of a broker, from a financial point of view, to keep that premium moving up,” said Jeffrey Hogan, a regional manager in Connecticut for a national insurance brokerage and one of a band of outliers in the industry pushing for changes in the way brokers are paid.

The info is here.

Wednesday, March 27, 2019

Language analysis reveals recent and unusual 'moral polarisation' in Anglophone world

Andrew Masterson
Cosmos Magazine
Originally published March 4, 2019

Here is an excerpt:

Words conveying moral values in more specific domains, however, did not always accord to a similar pattern – revealing, say the researchers, the changing prominence of differing sets of concerns surrounding concepts such as loyalty and betrayal, individualism, and notions of authority.

Remarkably, perhaps, the study is only the second in the academic literature that uses big data to examine shifts in moral values over time. The first, by psychologists Pelin and Selin Kesebir, published in The Journal of Positive Psychology in 2012, used two approaches to track the frequency of morally loaded words in a corpus of US books across the twentieth century.

The results revealed a “decline in the use of general moral terms”, and significant downturns in the use of words such as honesty, patience, and compassion.

Haslam and colleagues found that at headline level their results, using a larger dataset, reflected the earlier findings. However, fine-grained investigations revealed a more complex picture. Nevertheless, they say, the changes in the frequency of use for particular types of moral terms are sufficient to allow the twentieth century to be divided into five distinct historical periods.

The words used in the search were taken from lists collated under what is known as Moral Foundations Theory (MFT), a generally supported framework that rejects the idea that morality is monolithic. Instead, the researchers explain, MFT aims to “categorise the automatic and intuitive emotional reactions that commonly occur in moral evaluation across cultures, and [identifies] five psychological systems (or foundations): Harm, Fairness, Ingroup, Authority, and Purity.”
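A frequency analysis of the kind these studies describe can be sketched in a few lines: count how often each foundation's terms appear per 1,000 tokens of text. The mini-lexicon below is an illustrative stand-in for the actual Moral Foundations Dictionary (which is far larger), and the sample sentence is invented.

```python
import re
from collections import Counter

# Illustrative mini-lexicon keyed by the five MFT foundations named above.
# The real Moral Foundations Dictionary contains hundreds of terms.
MFT_WORDS = {
    "Harm":      {"harm", "cruel", "suffer", "care"},
    "Fairness":  {"fair", "justice", "cheat", "equal"},
    "Ingroup":   {"loyal", "betray", "family", "nation"},
    "Authority": {"obey", "duty", "respect", "rebel"},
    "Purity":    {"pure", "sacred", "filth", "decent"},
}

def foundation_frequencies(text):
    """Relative frequency of each foundation's terms per 1,000 tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        foundation: 1000 * sum(counts[w] for w in words) / total
        for foundation, words in MFT_WORDS.items()
    }

sample = ("A fair and loyal nation must not harm the innocent; "
          "justice and duty are sacred.")
freqs = foundation_frequencies(sample)
```

Run over a year-by-year corpus instead of one sentence, the same per-foundation rates give the time series whose shifts the researchers analyze.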

The info is here.

The Value Of Ethics And Trust In Business.. With Artificial Intelligence

Stephen Ibaraki
Originally posted March 2, 2019

Here is an excerpt:

Contributing positively to society and driving positive change are the subject of a growing discourse around the world, one that is reaching all sectors and disruptive technologies such as artificial intelligence (AI).

With more than $20 trillion in wealth transferring from baby boomers to millennials, and given millennials' focus on the environment and social impact, this trend will accelerate. Business is aware and is taking the lead in this movement of advancing the human condition in a responsible and ethical manner. Values-based leadership, diversity, inclusion, investment, and long-term commitment are the multi-stakeholder commitments going forward.

“Over the last 12 years, we have repeatedly seen that those companies who focus on transparency and authenticity are rewarded with the trust of their employees, their customers and their investors. While negative headlines might grab attention, the companies who support the rule of law and operate with decency and fair play around the globe will always succeed in the long term,” explained Ethisphere CEO, Timothy Erblich. “Congratulations to all of the 2018 honorees.”

The info is here.

Tuesday, March 26, 2019

Does AI Ethics Have a Bad Name?

Calum Chace
Originally posted March 7, 2019

Here is an excerpt:

Artificial intelligence is a technology, and a very powerful one, like nuclear fission. It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire. Like nuclear fission, electricity, and fire, AI can have positive and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.

It's the bias that concerns people in the AI ethics community.  They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether.  They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible so that in advance or in retrospect, we can check for sources of bias and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”?  We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics?  There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent.

The info is here.

Should doctors cry at work?

Fran Robinson
BMJ 2019;364:l690

Many doctors admit to crying at work, whether openly empathising with a patient or on their own behind closed doors. Common reasons for crying are compassion for a dying patient, identifying with a patient’s situation, or feeling overwhelmed by stress and emotion.

Probably still more doctors have done so but been unwilling to admit it for fear that it could be considered unprofessional—a sign of weakness, lack of control, or incompetence. However, it’s increasingly recognised as unhealthy for doctors to bottle up their emotions.

Unexpected tragic events
Psychiatry is a specialty in which doctors might view crying as acceptable, says Annabel Price, visiting researcher at the Department of Psychiatry, University of Cambridge, and a consultant in liaison psychiatry for older adults.

Having discussed the issue with colleagues before being interviewed for this article, she says that none of them would think less of a colleague for crying at work: “There are very few doctors who haven’t felt like crying at work now and again.”

A situation that may move psychiatrists to tears is finding that a patient they’ve been closely involved with has died by suicide. “This is often an unexpected tragic event: it’s very human to become upset, and sometimes it’s hard not to cry when you hear difficult news,” says Price.

The info is here.

Monday, March 25, 2019

U.S. companies put record number of robots to work in 2018

Originally published February 28, 2019

U.S. companies installed more robots last year than ever before, as cheaper and more flexible machines put them within reach of businesses of all sizes and in more corners of the economy beyond their traditional foothold in car plants.

Shipments hit 28,478, nearly 16 percent more than in 2017, according to data seen by Reuters that was set for release on Thursday by the Association for Advancing Automation, an industry group based in Ann Arbor, Michigan.

Shipments increased in every sector the group tracks, except automotive, where carmakers cut back after finishing a major round of tooling up for new truck models.

The info is here.

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability

Alex John London
The Hastings Center Report
Volume 49, Issue 1, January/February 2019, Pages 15-21


Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.

The info is here.

Sunday, March 24, 2019

An Ethical Obligation for Bioethicists to Utilize Social Media

Herron, PD
Hastings Cent Rep. 2019 Jan;49(1):39-40.
doi: 10.1002/hast.978.

Here is an excerpt:

Unfortunately, it appears that bioethicists are no better informed than other health professionals, policy experts, or (even) elected officials, and they are sometimes resistant to becoming informed. But bioethicists have a duty to develop our knowledge and usefulness with respect to social media; many of our skills can and should be adapted to this area. There is growing evidence of the power of social media to foster dissemination of misinformation. The harms associated with misinformation or “fake news” are not new threats. Historically, there have always been individuals or organized efforts to propagate false information or to deceive others. Social media and other technologies have provided the ability to rapidly and expansively share both information and misinformation. Bioethics serves society by offering guidance about ethical issues associated with advances in medicine, science, and technology. Much of the public’s conversation about and exposure to these emerging issues occurs online. If we bioethicists are not part of the mix, we risk yielding to alternative and less authoritative sources of information. Social media’s transformative impact has led some to view it as not just a personal tool but the equivalent to a public utility, which, as such, should be publicly regulated. Bioethicists can also play a significant part in this dialogue. But to do so, we need to engage with social media. We need to ensure that our understanding of social media is based on experiential use, not just abstract theory.

Bioethics has expanded over the past few decades, extending beyond the academy to include, for example, clinical ethics consultants and leadership positions in public affairs and public health policy. These varied roles bring weighty responsibilities and impose a need for critical reflection on how bioethicists can best serve the public interest in a way that reflects and is accountable to the public’s needs.

Saturday, March 23, 2019

The Fake Sex Doctor Who Conned the Media Into Publicizing His Bizarre Research on Suicide, Butt-Fisting, and Bestiality

Jennings Brown
Originally published March 1, 2019

Here is an excerpt:

Despite Sendler’s claims that he is a doctor, and despite the stethoscope in his headshot, he is not a licensed doctor of medicine in the U.S. Two employees of the Harvard Medical School registrar confirmed to me that Sendler was never enrolled and never received an MD from the medical school. A Harvard spokesperson told me Sendler never received a PhD or any degree from Harvard University.

“I got into Harvard Medical School for MD, PhD, and Masters degree combined,” Sendler told me. I asked if he was able to get a PhD in sexual behavior from Harvard Medical School (Harvard Medical School does not provide any sexual health focuses) and he said “Yes. Yes,” without hesitation, then doubled down: “I assume that there’s still some kind of sense of wonder on campus [about me]. Because I can see it when I go and visit [Harvard], that people are like, ‘Wow you had the balls, because no one else did that,’” presumably referring to his academic path.

Sendler told me one of his mentors when he was at Harvard Medical School was Yi Zhang, a professor of genetics at the school. Sendler said Zhang didn’t believe in him when he was studying at Harvard. But, Sendler said, he met with Zhang in Boston just a month prior to our interview. And Zhang was now impressed by Sendler’s accomplishments.

Sendler said Zhang told him in January, “Congrats. You did what you felt was right... Turns out, wow, you have way more power in research now than I do. And I’m just very proud of you, because I have people that I really put a lot of effort, after you left, into making them the best and they didn’t turn out that well.”

The info is here.

This is a fairly bizarre story and worth the long read.

Friday, March 22, 2019

We need to talk about systematic fraud

Jennifer Byrne
Nature 566, 9 (2019)
doi: 10.1038/d41586-019-00439-9

Here is an excerpt:

Some might argue that my efforts are inconsequential, and that the publication of potentially fraudulent papers in low-impact journals doesn’t matter. In my view, we can’t afford to accept this argument. Such papers claim to uncover mechanisms behind a swathe of cancers and rare diseases. They could derail efforts to identify easily measurable biomarkers for use in predicting disease outcomes or whether a drug will work. Anyone trying to build on any aspect of this sort of work would be wasting time, specimens and grant money. Yet, when I have raised the issue, I have had comments such as “ah yes, you’re working on that fraud business”, almost as a way of closing down discussion. Occasionally, people’s reactions suggest that ferreting out problems in the literature is a frivolous activity, done for personal amusement, or that it is vindictive, pursued to bring down papers and their authors.

Why is there such enthusiasm for talking about faulty research practices, yet such reluctance to discuss deliberate deception? An analysis of the Diederik Stapel fraud case that rocked the psychology community in 2011 has given me some ideas (W. Stroebe et al. Perspect. Psychol. Sci. 7, 670–688; 2012). Fraud departs from community norms, so scientists do not want to think about it, let alone talk about it. It is even more uncomfortable to think about organized fraud that is so frequently associated with one country. This becomes a vicious cycle: because fraud is not discussed, people don’t learn about it, so they don’t consider it, or they think it’s so rare that it’s unlikely to affect them, and so papers are less likely to come under scrutiny. Thinking and talking about systematic fraud is essential to solving this problem. Raising awareness and the risk of detection may well prompt new ways to identify papers produced by systematic fraud.

Last year, China announced sweeping plans to curb research misconduct. That’s a great first step. Next should be a review of publication quotas and cash rewards, and the closure of ‘paper factories’.

The info is here.

Pop Culture, AI And Ethics

Phaedra Boinodiris
Originally published February 24, 2019

Here is an excerpt:

5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – There is a group responsible for ensuring that real guests in the hotel are interviewed to determine their needs. When feedback is negative, this group implements a feedback loop to better understand preferences. They ensure that at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to train with a larger, more diverse set of data. Ensure that the data collected about a user's race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics.

Explainability and Enforced Transparency – If a guest doesn’t like the AI’s answer, she can ask how it made that recommendation and which dataset it used. A user must explicitly opt in to use the assistant, and the guest must be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest’s data, and a guest has the right to have the system purged at any time. Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience to the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto-deleted. Ensure that the AI can speak in the guest’s respective language.

The info is here.

Thursday, March 21, 2019

Anger as a moral emotion: A 'bird's eye view' systematic review

Tim Lomas
Counseling Psychology Quarterly

Anger is a common problem for which counseling/psychotherapy clients seek help, and it is typically regarded as an invidious negative emotion to be ameliorated. However, it may be possible to reframe anger as a moral emotion, arising in response to perceived transgressions, thereby endowing it with meaning. In that respect, the current paper offers a ‘bird’s eye’ systematic review of empirical research on anger as a moral emotion (i.e., one focusing broadly on the terrain as a whole, rather than on specific areas). Three databases were reviewed from the start of their records to January 2019. Eligibility criteria included empirical research, published in English in peer-reviewed journals, on anger specifically as a moral emotion. 175 papers met the criteria and fell into four broad classes of study: survey-based; experimental; physiological; and qualitative. In reviewing the articles, this paper pays particular attention to: how/whether anger can be differentiated from other moral emotions; antecedent causes and triggers; contextual factors that influence or mitigate anger; and outcomes arising from moral anger. Together, the paper offers a comprehensive overview of current knowledge of this prominent and problematic emotion. The results may be of use to counsellors and psychotherapists helping to address anger issues in their clients.

Download the paper here.

Note: Other "symptoms" in mental health can also be reframed as moral issues. PTSD is similar to moral injury. OCD is highly correlated with scrupulosity, an excessive concern about moral purity. Unhealthy guilt is found in many depressed individuals. And psychologists use forgiveness of self and others as a goal in treatment.

China’s CRISPR twins might have had their brains inadvertently enhanced

Antonio Regalado 
MIT Technology Review
Originally posted February 21, 2019

The brains of two genetically edited girls born in China last year may have been changed in ways that enhance cognition and memory, scientists say.

The twins, called Lulu and Nana, reportedly had their genes modified before birth by a Chinese scientific team using the new editing tool CRISPR. The goal was to make the girls immune to infection by HIV, the virus that causes AIDS.

Now, new research shows that the same alteration introduced into the girls’ DNA, deletion of a gene called CCR5, not only makes mice smarter but also improves human brain recovery after stroke, and could be linked to greater success in school.

“The answer is likely yes, it did affect their brains,” says Alcino J. Silva, a neurobiologist at the University of California, Los Angeles, whose lab uncovered a major new role for the CCR5 gene in memory and the brain’s ability to form new connections.

“The simplest interpretation is that those mutations will probably have an impact on cognitive function in the twins,” says Silva. He says the exact effect on the girls’ cognition is impossible to predict, and “that is why it should not be done.”

The info is here.

Wednesday, March 20, 2019

Israel Approves Compassionate Use of MDMA to Treat PTSD

Ido Efrati
Originally posted February 10, 2019

MDMA, popularly known as ecstasy, is a drug more commonly associated with raves and nightclubs than a therapist’s office.

Emerging research has shown promising results in using this “party drug” to treat patients suffering from post-traumatic stress disorder, and Israel’s Health Ministry has just approved the use of MDMA to treat dozens of patients.

MDMA is classified in Israel as a “dangerous drug”; recreational use is illegal, and therapeutic use of MDMA has yet to be formally approved and is still in clinical trials.

However, this treatment is deemed “compassionate use,” which allows drugs that are still in development to be made available to patients outside of a clinical trial due to the lack of effective alternatives.

The info is here.

Should This Exist? The Ethics Of New Technology

Lulu Garcia-Navarro
Originally posted March 3, 2019

Not every new technology product hits the shelves.

Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.

Or maybe, it might be too dangerous.

Recently, the research firm OpenAI announced that it would not be releasing a version of a text generator they developed, because of fears that it could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.

The organization's GPT-2 text generator can generate paragraphs of coherent, continuing text based on a prompt from a human. For example, when given the claim, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spit out the transcript of "his acceptance speech" that read in part:
It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.
Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.

The info is here.

Tuesday, March 19, 2019

Treasury Secretary Steven Mnuchin's Hollywood ties spark ethics questions in China trade talks

Emma Newburger
Originally posted March 15, 2019

Treasury Secretary Steven Mnuchin, one of President Donald Trump's key negotiators in the U.S.-China trade talks, has pushed Beijing to grant the American film industry greater access to its markets.

But now, Mnuchin’s ties to Hollywood are raising ethical questions about his role in those negotiations. Mnuchin had been a producer in a raft of successful films prior to joining the Trump administration.

In 2017, he divested his stake in a film production company after joining the White House. But he sold that position to his wife, filmmaker and actress Louise Linton, for between $1 million and $2 million, The New York Times reported on Thursday. At the time, she was his fiancée.

That company, StormChaser Partners, helped produce the mega-hit movie “Wonder Woman,” which grossed $90 million in China, according to the Times. Yet, because of China’s restrictions on foreign films, the producers received a small portion of that money. Mnuchin has been personally engaged in trying to ease those rules, which could be a boon to the industry, according to the Times.

The info is here.

We're Teaching Consent All Wrong

Sarah Sparks
Originally published January 8, 2019

Here is an excerpt:

Instead, researchers and educators offer an alternative: Teach consent as a life skill—not just a sex skill—beginning in early childhood, and begin discussing consent and communication in the context of relationships by 5th or 6th grades, before kids start seriously thinking about sex. (Think that's too young? In yet another study, the CDC found 8 in 10 teenagers didn't get sex education until after they'd already had sex.)

Educators and parents often balk at discussing strategies for and examples of consent because "they incorrectly believe that if you teach consent, students will become more sexually active," said Mike Domitrz, founder of the Date Safe Project, a Milwaukee-based sexual-assault prevention program that focuses on consent education and bystander interventions. "It's a myth. Students of both genders are pretty consistent that a lot of the sexual activity that is going on is occurring under pressure."

Studies suggest young women are more likely to judge consent on verbal communication while young men rely more on nonverbal cues, though both groups said nonverbal signals are often misinterpreted. And teenagers can be particularly bad at making decisions about risky behavior, including sexual situations, while under social pressure. Brain studies have found adolescents are more likely to take risks and less likely to think about negative consequences when they are in emotionally arousing, or "hot," situations, and that bad decision-making tends to get even worse when they feel they are being judged by their friends.

Making understanding and negotiating consent a life skill gives children and adolescents ways to understand and respect both their own desires and those of other people. And it can help educators frame instruction about consent without sinking into the morass of long-running arguments and anxiety over gender roles, cultural values, and teen sexuality.

The info is here.

Monday, March 18, 2019

The college admissions scandal is a morality play

Elaine Ayala
San Antonio Express-News
Originally posted March 16, 2019

The college admission cheating scandal that raced through social media and dominated news cycles this week wasn’t exactly shocking: Wealthy parents rigged the system for their underachieving children.

It’s an ancient morality play set at elite universities with an unseemly cast of characters: spoiled teens and shameless parents; corrupt test proctors and paid test takers; as well as college sports officials willing to be bribed and a ring leader who ultimately turned on all of them.

William “Rick” Singer, who went to college in San Antonio, wore a wire to cooperate with FBI investigators.


Yet even though they were arrested, the 50 people involved managed to secure the best possible outcome under the circumstances. Unlike many caught shoplifting or possessing small amounts of marijuana and who lack the lawyers and resources to help them navigate the legal system, the accused parents and coaches quickly posted bond and were promptly released without spending much time in custody.

The info is here.

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest AI we have to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chats, scale advice for potential suicide victims, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts to social media.

But I argue these malicious applications are already possible without this AI. There exist other public models which can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because A) it sets a bad precedent for open research, B) keeps companies from improving their services, C) unnecessarily hypes these results and D) may trigger unnecessary fears about AI in the general public.

The info is here.

Sunday, March 17, 2019

Actions Speak Louder Than Outcomes in Judgments of Prosocial Behavior

Daniel A. Yudkin, Annayah M. B. Prosser, and Molly J. Crockett
Emotion (2018).

Recently proposed models of moral cognition suggest that people's judgments of harmful acts are influenced by their consideration both of those acts' consequences ("outcome value"), and of the feeling associated with their enactment ("action value"). Here we apply this framework to judgments of prosocial behavior, suggesting that people's judgments of the praiseworthiness of good deeds are determined both by the benefit those deeds confer to others and by how good they feel to perform. Three experiments confirm this prediction. After developing a new measure to assess the extent to which praiseworthiness is influenced by action and outcome values, we show how these factors make significant and independent contributions to praiseworthiness. We also find that people are consistently more sensitive to action than to outcome value in judging the praiseworthiness of good deeds, but not harmful deeds. This observation echoes the finding that people are often insensitive to outcomes in their giving behavior. Overall, this research tests and validates a novel framework for understanding moral judgment, with implications for the motivations that underlie human altruism.

Here is an excerpt:

On a broader level, past work has suggested that judging the wrongness of harmful actions involves a process of “evaluative simulation,” whereby we evaluate the moral status of another’s action by simulating the affective response that we would experience performing the action ourselves (Miller et al., 2014). Our results are consistent with the possibility that evaluative simulation also plays a role in judging the praiseworthiness of helpful actions.  If people evaluate helpful actions by simulating what it feels like to perform the action, then we would expect to see similar biases in moral evaluation as those that exist for moral action. Previous work has shown that individuals often do not act to maximize the benefits that others receive, but instead to maximize the good feelings associated with performing good deeds (Berman et al., 2018; Gesiarz & Crockett, 2015; Ribar & Wilhelm, 2002). Thus, the asymmetry in moral evaluation seen in the present studies may reflect a correspondence between first-person moral decision-making and third-person moral evaluation.

Download the pdf here.
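The "significant and independent contributions" of action and outcome values that the abstract reports are the kind of result a multiple-regression analysis yields. A toy sketch of that style of model on synthetic ratings (an illustration of the idea only, not the authors' actual analysis or data):

```python
import numpy as np

def fit_contributions(action, outcome, praise):
    """Estimate independent weights for action and outcome value via
    ordinary least squares: praise ~ b0 + b_a*action + b_o*outcome."""
    X = np.column_stack([np.ones(len(action)), action, outcome])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(praise, dtype=float), rcond=None)
    return {"intercept": coefs[0], "action": coefs[1], "outcome": coefs[2]}

# Synthetic ratings in which action value carries more weight than
# outcome value, mirroring the asymmetry reported for good deeds.
rng = np.random.default_rng(0)
action = rng.random(200)
outcome = rng.random(200)
praise = 2.0 + 3.0 * action + 1.0 * outcome
weights = fit_contributions(action, outcome, praise)
```

On data like this, a larger estimated `action` weight than `outcome` weight corresponds to the paper's finding that judges are more sensitive to how a deed feels to perform than to the benefit it confers.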

Saturday, March 16, 2019

How Should AI Be Developed, Validated, and Implemented in Patient Care?

Michael Anderson and Susan Leigh Anderson
AMA J Ethics. 2019;21(2):E125-130.
doi: 10.1001/amajethics.2019.125.


Should an artificial intelligence (AI) program that appears to have a better success rate than human pathologists be used to replace or augment humans in detecting cancer cells? We argue that some concerns—the “black-box” problem (ie, the unknowability of how output is derived from input) and automation bias (overreliance on clinical decision support systems)—are not significant from a patient’s perspective but that expertise in AI is required to properly evaluate test results.

Here is an excerpt:

Automation bias. Automation bias refers generally to a kind of complacency that sets in when a job once done by a health care professional is transferred to an AI program. We see nothing ethically or clinically wrong with automation, if the program achieves a virtually 100% success rate. If, however, the success rate is lower than that—92%, as in the case presented—it’s important that we have assurances that the program has quality input; in this case, that probably means that the AI program “learned” from a cross section of female patients of diverse ages and races. With diversity of input secured, what matters most, ethically and clinically, is that the AI program has a higher cancer cell-detection success rate than human pathologists.

Friday, March 15, 2019

Ethical considerations on the complicity of psychologists and scientists in torture

Evans NG, Sisti DA, Moreno JD
Journal of the Royal Army Medical Corps 
Published Online First: 20 February 2019.
doi: 10.1136/jramc-2018-001008


The long-standing debate on medical complicity in torture has overlooked the complicity of cognitive scientists—psychologists, psychiatrists and neuroscientists—in the practice of torture as a distinct phenomenon. In this paper, we identify the risk of the re-emergence of torture as a practice in the USA, and the complicity of cognitive scientists in these practices.

We review arguments for physician complicity in torture. We argue that these defences fail to defend the complicity of cognitive scientists. We address objections to our account, and then provide recommendations for professional associations in resisting complicity in torture.

Arguments for cognitive scientist complicity in torture fail when those actions stem from the same reasons as physician complicity. Cognitive scientist involvement in the torture programme has, from the outset, been focused on the outcomes of interrogation rather than supportive care. Any possibility of a therapeutic relationship between cognitive therapists and detainees is fatally undermined by therapists’ complicity with torture.

Professional associations ought to strengthen their commitment to refraining from engaging in any aspect of torture. They should also move to protect whistle-blowers against torture programmes who are members of their association. If the political institutions that are supposed to prevent the practice of torture are not strengthened, cognitive scientists should take collective action to compel intelligence agencies to refrain from torture.

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally posted February 15, 2019

Here is an excerpt:

Unjust accountability systems. When an organization’s processes for measuring employee contributions are perceived as unfair or unjust, we found it is 3.77 times more likely to have people withhold or distort information. We intentionally excluded compensation in our research, because incentive structures can sometimes play disproportionate roles in influencing behavior, and simply looked at how contribution was measured and evaluated through performance management systems, routine feedback processes, and cultural recognition. One interviewee captured a pervasive sentiment about how destructive these systems can be: “I don’t know why I work so hard. My boss doesn’t have a clue what I do. I fill out the appraisal forms at the end of the year, he signs them and sends them to HR. We pretend to have a discussion, and then we start over. It’s a rigged system.” Our study showed that when accountability processes are seen as unfair, people feel forced to embellish their accomplishments and hide, or make excuses for, their shortfalls. That sets the stage for dishonest behavior. Research on organizational injustice shows a direct correlation between an employee’s sense of fairness and a conscious choice to sabotage the organization. And more recent research confirms that unfair comparison among employees leads directly to unethical behavior.

Fortunately, our statistical models show that even a 20% improvement in performance management consistency, as evidenced by employees belief that their contributions have been fairly assessed against known standards, can improve truth telling behavior by 12%.

The info is here.

Thursday, March 14, 2019

An ethical pathway for gene editing

Julian Savulescu & Peter Singer
First published January 29, 2019

Ethics is the study of what we ought to do; science is the study of how the world works. Ethics is essential to scientific research in defining the concepts we use (such as the concept of ‘medical need’), deciding which questions are worth addressing, and what we may do to sentient beings in research.

The central importance of ethics to science is exquisitely illustrated by the recent gene editing of two healthy embryos by the Chinese biophysicist He Jiankui, resulting in the birth of baby girls born this month, Lulu and Nana. A second pregnancy is underway with a different couple. To make the babies resistant to human immunodeficiency virus (HIV), He edited out a gene (CCR5) that produces a protein which allows HIV to enter cells. One girl has both copies of the gene modified (and may be resistant to HIV), while the other has only one (making her still susceptible to HIV).

He Jiankui invited couples to take part in this experiment where the father was HIV positive and the mother HIV negative. He offered free in vitro fertilization (IVF) with sperm washing to avoid transmission of HIV. He also offered medical insurance, expenses and treatment capped at 280,000 RMB/CNY, equivalent to around $40,000. The package includes health insurance for the baby for an unspecified period. Medical expenses and compensation arising from any harm caused by the research were capped at 50,000 RMB/CNY ($7000 USD). He says this was from his own pocket. Although the parents were offered the choice of having either gene‐edited or ‐unedited embryos transferred, it is not clear whether they understood that editing was not necessary to protect their child from HIV, nor what pressure they felt under. There has been valid criticism of the process of obtaining informed consent. The information was complex and probably unintelligible to lay people.

The info is here.

Wednesday, March 13, 2019

Professional ethics takes a team approach

Richard Kyte
Lacrosse Tribune
Originally posted February 24, 2019

Here is an excerpt:

Why do some professions enjoy consistently high levels of trust while other professions rate low year after year?

Part of the answer may lie in the motivations of individuals within the professions. When I ask nursing students why they want to go into nursing, they invariably respond by saying they want to help others. Business students, by contrast, are more likely to be motivated by self-interest.

But motivation does not fully explain the reputational difference among professions. Most young people who go into ministry or politics also embark upon their careers with pro-social motivations. And my own experience of lawyers, bankers, real estate agents and car salespeople suggests that the individuals in those professions are just as trustworthy as anybody else.

If that is true, then what earns a profession a positive or negative reputation is not just the people in the profession but the way the profession is practiced. Especially important is the way different professions handle ethically problematic cases and circumstances.

The info is here.

Why Sexual Morality Doesn't Exist

Alan Goldman
Originally posted February 12, 2019

There is no such thing as sexual morality per se. Put less dramatically, there is no morality special to sex: no act is wrong simply because of its sexual nature. Sexual morality consists in moral considerations that are relevant elsewhere as well being applied to sexual activity or relations. This is because the proper concept of sexual activity is morally neutral. Sexual activity is that which fulfills sexual desire.  Sexual desire in its primary sense can be defined as desire for physical contact with another person’s body and for the pleasure that such contact brings. Masturbation or desire to view pornography are sexual activity and desire in a secondary sense, substitutes for normal sexual desire in its primary sense. Sex itself is not a moral category, although it places us in relations in which moral considerations apply. It gives us opportunity to do what is otherwise regarded as wrong: to harm, deceive, or manipulate others against their will.

As other philosophers point out, pleasure is normally a byproduct of successfully doing things not aimed at pleasure directly, but this is not the case with sex. Sexual desire aims directly at the pleasure derived from physical contact. Desire for physical contact in other contexts, for example contact sports, is not sexual because it has other motives (winning, exhibiting dominance, etc.), but sexual desire in itself has no other motive. It is not a desire to reproduce or to express love or other emotions, although sexual activity, like other activities, can express various emotions including love.

The info is here.

Tuesday, March 12, 2019

Atlanta child psychologist admits to molesting girl, posting it online

Ben Brasch and Kristal Dixon
The Atlanta Journal-Constitution
Originally posted February 15, 2019

A metro Atlanta psychologist who spent years counseling children will spend two decades in prison after violating the trust of those around him and molesting a pre-teen girl.

In a muffled voice, Jonathan Gersh, 38, admitted in a Cobb County court Friday that he took lewd pictures of the girl, which he posted online and which were viewed around the world.

Prosecutors said he also took cell phone pictures of children in bathing suits in public and offered to trade photos online. “These pictures are not baseball cards to be traded,” said Judge Steven Schuster.

Gersh’s charges included child molestation and child pornography, but Schuster said “I have a duty to the victim, the victim’s family and society to stop any form of what I perceive to be sex trafficking.”

Defense attorney Richard Grossman had asked the judge for five years in prison. Prosecutors asked for 18 years. Grossman described his client’s actions as compulsive “voyeurism” and said Gersh “led an exemplary life” as a loving father who offered pro bono counseling to the community.

The info is here.

Sex robots are here, but laws aren’t keeping up with the ethical and privacy issues they raise

Francis Shen
The Conversation
Originally published February 12, 2019

Here is an excerpt:

A brave new world

A fascinating question for me is how the current taboo on sex robots will ebb and flow over time.

There was a time, not so long ago, when humans attracted to the same sex felt embarrassed to make this public. Today, society is similarly ambivalent about the ethics of “digisexuality” – a phrase used to describe a number of human-technology intimate relationships. Will there be a time, not so far in the future, when humans attracted to robots will gladly announce their relationship with a machine?

No one knows the answer to this question. But I do know that sex robots are likely to be in the American market soon, and it is important to prepare for that reality. Imagining the laws governing sexbots is no longer a law professor hypothetical or science fiction.

The info is here.

Monday, March 11, 2019

Steven Mnuchin’s financial disclosures haven’t earned ethics officials’ blessing. What’s the hold-up?

Carrie Levine
The Center for Public Integrity
Originally published March 8, 2019

The executive branch’s chief ethics watchdogs have yet to certify Treasury Secretary Steven Mnuchin’s annual financial disclosure — an unusually lengthy delay in finalizing a document they’ve had for more than eight months.

While the Office of Government Ethics won’t publicly explain the holdup, an analysis of Mnuchin’s disclosure, which was obtained by the Center for Public Integrity, identified entries that outside ethics experts say could be the hitch.

The disclosure statement covers Mnuchin’s 2017 personal finances, and Treasury’s own ethics officials certified it after finding no conflicts of interest.

Some entries on Mnuchin’s 53-page form involve Stormchaser Partners LLC, a film production company owned by Mnuchin’s wife, Louise Linton.

Mnuchin’s ethics agreement, negotiated when he joined the government, required him to step down from the chairmanship of Stormchaser Partners and divest his own ownership interest in it within 90 days of his confirmation in February 2017. (Mnuchin also agreed to divest dozens of other assets that ethics officials said potentially presented conflicts of interest or the appearance of one.)

The info is here. 

The Parking Lot Suicide

Emily Wax-Thibodeaux.
The Washington Post
Originally published February 11, 2019

Here is an excerpt:

Miller was suffering from post-traumatic stress disorder and suicidal thoughts when he checked into the Minneapolis Department of Veterans Affairs hospital in February 2018. After spending four days in the mental-health unit, Miller walked to his truck in VA’s parking lot and shot himself in the very place he went to find help.

“The fact that my brother, Justin, never left the VA parking lot — it’s infuriating,” said Harrington, 37. “He did the right thing; he went in for help. I just can’t get my head around it.”

A federal investigation into Miller’s death found that the Minneapolis VA made multiple errors: not scheduling a follow-up appointment, failing to communicate with his family about the treatment plan and inadequately assessing his access to firearms.

Several days after his death, Miller’s parents received a package from the Department of Veterans Affairs — bottles of antidepressants and sleep aids prescribed to Miller.

His death is among 19 suicides that occurred on VA campuses from October 2017 to November 2018, seven of them in parking lots, according to the Department of Veterans Affairs.

While studies show that every suicide is highly complex — influenced by genetics, financial uncertainty, relationship loss and other factors — mental-health experts worry that veterans taking their lives on VA property has become a desperate form of protest against a system that some veterans feel hasn’t helped them.

The most recent parking lot suicide occurred weeks before Christmas in St. Petersburg, Fla. Marine Col. Jim Turner, 55, dressed in his uniform blues and medals, sat on top of his military and VA records and killed himself with a rifle outside the Bay Pines Department of Veterans Affairs.

“I bet if you look at the 22 suicides a day you will see VA screwed up in 90%,” Turner wrote in a note investigators found near his body.

The info is here.

Sunday, March 10, 2019

Rethinking Medical Ethics

Insights Team
Originally posted February 11, 2019

Here is an excerpt:

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges.

Avoiding Bias

In 2017, the data analytics team at University of Chicago Medicine (UCM) used AI to predict how long a patient might stay in the hospital. The goal was to identify patients who could be released early, freeing up hospital resources and providing relief for the patient. A case manager would then be assigned to help sort out insurance, make sure the patient had a ride home, and otherwise smooth the way for early discharge.

In testing the system, the team found that the most accurate predictor of a patient’s length of stay was his or her ZIP code. This immediately raised red flags for the team: ZIP codes, they knew, were strongly correlated with a patient’s race and socioeconomic status. Relying on them would disproportionately affect African-Americans from Chicago’s poorest neighborhoods, who tended to stay in the hospital longer. The team decided that using the algorithm to assign case managers would be biased and unethical.

The info is here.
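The red flag the UCM team caught — a feature like ZIP code standing in for race and socioeconomic status — can be checked for systematically with a proxy-feature audit: before deployment, measure how strongly each candidate input is associated with a protected attribute. A minimal sketch using Cramér's V on hypothetical data and column names (not the UCM team's actual code):

```python
import numpy as np
import pandas as pd

def cramers_v(x, y):
    """Cramér's V: association strength between two categorical series (0 to 1)."""
    table = pd.crosstab(x, y).to_numpy()
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

def proxy_audit(df, features, protected, threshold=0.3):
    """Flag features so strongly tied to a protected attribute that
    they may act as proxies for it."""
    return {f: v for f in features
            if (v := cramers_v(df[f], df[protected])) >= threshold}

# Hypothetical data: ZIP code tracks race exactly; shoe size does not.
df = pd.DataFrame({
    "zip": ["A", "A", "B", "B"] * 10,
    "shoe_size": ["s", "l", "s", "l"] * 10,
    "race": ["x", "x", "y", "y"] * 10,
})
flagged = proxy_audit(df, ["zip", "shoe_size"], "race")
```

A feature flagged by an audit like this is not automatically unusable, but, as in the UCM case, it forces the deployment team to decide deliberately whether relying on it would be biased.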

Saturday, March 9, 2019

Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Irene Y. Chen, Peter Szolovits, and Marzyeh Ghassemi
AMA J Ethics. 2019;21(2):E167-179.
doi: 10.1001/amajethics.2019.167.


Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems’ data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all.

Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type as a proxy for socioeconomic status.

Results: Clinical note topics and psychiatric note topics were heterogeneous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy, and therefore machine bias, are shown with respect to gender and insurance type for ICU mortality and with respect to insurance policy for psychiatric 30-day readmission.

Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.
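The assessment framework the abstract describes boils down to comparing prediction accuracy across demographic groups. A minimal sketch of such a per-group audit (hypothetical labels and groups; not the paper's code):

```python
import pandas as pd

def accuracy_by_group(y_true, y_pred, group):
    """Per-group prediction accuracy and the best-to-worst gap,
    a simple disparate-impact signal."""
    df = pd.DataFrame({"true": y_true, "pred": y_pred, "group": group})
    df["correct"] = df["true"] == df["pred"]
    acc = df.groupby("group")["correct"].mean()
    return acc.to_dict(), float(acc.max() - acc.min())

# Hypothetical predictions for two insurance-payer groups.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
payer = ["private"] * 4 + ["public"] * 4
per_group, gap = accuracy_by_group(y_true, y_pred, payer)
```

A large gap means the model serves one group markedly worse, which is the kind of disparity the authors report with respect to gender and insurance type.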

Friday, March 8, 2019

Seven moral rules found all around the world

University of Oxford
Originally released February 12, 2019

Anthropologists at the University of Oxford have discovered what they believe to be seven universal moral rules.

The rules: help your family, help your group, return favours, be brave, defer to superiors, divide resources fairly, and respect others' property. These were found in a survey of 60 cultures from all around the world.

Previous studies have looked at some of these rules in some places – but none has looked at all of them in a large representative sample of societies. The present study, published in Current Anthropology, is the largest and most comprehensive cross-cultural survey of morals ever conducted.

The team from Oxford's Institute of Cognitive & Evolutionary Anthropology (part of the School of Anthropology & Museum Ethnography) analysed ethnographic accounts of ethics from 60 societies, comprising over 600,000 words from over 600 sources.

Dr. Oliver Scott Curry, lead author and senior researcher at the Institute for Cognitive and Evolutionary Anthropology, said: "The debate between moral universalists and moral relativists has raged for centuries, but now we have some answers. People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them. As predicted, these seven moral rules appear to be universal across cultures. Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do."

The study tested the theory that morality evolved to promote cooperation, and that – because there are many types of cooperation – there are many types of morality. According to this theory of 'morality as cooperation', kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favours, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains why we engage in costly displays of prowess such as bravery and generosity, why we defer to our superiors, why we divide disputed resources fairly, and why we recognise prior possession.

The information is here.

Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies

Oliver Scott Curry, Daniel Austin Mullins, and Harvey Whitehouse
Current Anthropology
The paper is here.


What is morality? And to what extent does it vary around the world? The theory of “morality-as-cooperation” argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and it predicts that specific forms of cooperative behavior—including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession—will be considered morally good wherever they arise, in all cultures. To test these predictions, we investigate the moral valence of these seven cooperative behaviors in the ethnographic records of 60 societies. We find that the moral valence of these behaviors is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviors are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.

Thursday, March 7, 2019

Prominent psychiatrist accused of sexually exploiting patients

Michael Rezendes
The Boston Globe
Originally posted February 21, 2019

A prominent North Shore psychiatrist is facing lawsuits from three female patients who say he lured them into degrading sexual relationships, including beatings, conversations about bondage, and, in one case, getting a tattoo of the doctor’s initials to show his “ownership” of her, according to court documents.

The women allege that Dr. Keith Ablow, an author who was a contributor to Fox News network until 2017, abused his position while treating them for acute depression, leaving them unable to trust authority figures and plagued with feelings of shame and self-recrimination.

“He began to hit me when we engaged in sexual activities,” wrote one plaintiff, a New York woman, in a sworn affidavit filed with her lawsuit. “He would have me on my knees and begin to beat me with his hands on my breasts,” she wrote, “occasionally saying, ‘I own you,’ or ‘You are my slave.’”

The malpractice lawsuits, two of them filed on Thursday in Essex Superior Court and a third filed last year, paint a picture of a therapist who encouraged women to trust and rely on him, then coaxed them into humiliating sexual activities, often during treatment sessions for which they were charged.

When the New York woman had trouble paying her therapy bills, she said, Ablow advised her to work as an escort or stripper because the work was lucrative.

Although the women used their real names in their lawsuits, the Globe is withholding their identities at their request.  The Globe does not identify alleged victims of sexual abuse without their consent.

The info is here.

Supreme Court should adopt an ethics code

Robert H. Tembeckjian
Special to the Washington Post
Originally published February 23, 2019

During the contentious Supreme Court confirmation process for Brett Kavanaugh, and soon after he was confirmed on Oct. 6, dozens of ethics complaints against him were filed. All were dismissed on Dec. 18 by a federal judicial review panel, without investigation, because once Kavanaugh was elevated to the Supreme Court, he became immune to ethics oversight that applies to judges in lower courts.

Allegations that the review panel had deemed “serious” – that Kavanaugh had testified falsely during his confirmation hearings about his personal conduct and about his activities in the White House under President George W. Bush, and that he had displayed partisan bias and a lack of judicial temperament – went into ethical limbo.

The fate of the Kavanaugh complaints seems to have stirred House Democrats to action: The first bill introduced in the 116th Congress, H.R. 1, includes, along with provisions for voting rights and campaign finance reform, a measure to require the development of a judicial code of ethics that would apply to all federal judges, including those on the Supreme Court.

Chief Justice John Roberts is on the record as opposing such a move. In 2011, he addressed it at some length in his year-end report on the federal judiciary. Roberts argued that the justices already adhere informally to some ethical strictures, and that the separation-of-powers doctrine precludes Congress from imposing such a mandate on the Supreme Court.

Roberts’ statement didn’t deter Rep. Louise Slaughter, D-N.Y., from introducing legislation in 2013 and in subsequent sessions that would impose a code of ethics on the Supreme Court. Slaughter died last year. Her proposals never gained traction in Congress, and the current incarnation of the idea probably faces a steep challenge, with Republicans controlling the Senate and Democrats controlling the House.

The info is here.

Wednesday, March 6, 2019

A Pedophile Doctor Drew Suspicions for 21 Years. No One Stopped Him.

Christopher Weaver, Dan Frosch and Gabe Johnson
The Wall Street Journal
Originally posted February 8, 2019

Here is an excerpt:

An investigation by The Wall Street Journal and the PBS series Frontline found the IHS repeatedly missed or ignored warning signs, tried to silence whistleblowers and allowed Mr. Weber to continue treating children despite the suspicions of colleagues up and down the chain of command.

The investigation also found that the agency tolerated a number of problem doctors because it was desperate for medical staff, and that managers there believed they might face retaliation if they followed up on suspicions of abuse. The federal agency has long been criticized for providing inadequate care to Native Americans.

After a tribal prosecutor outside of the IHS finally investigated his crimes, Mr. Weber was indicted in 2017 and 2018 for sexually assaulting six patients in Montana and South Dakota. Court documents and interviews with former patients show that Mr. Weber plied teen boys with money, alcohol and sometimes opioids, and coerced them into oral and anal sex with him in hospital exam rooms and at his government housing unit.

“IHS, the local here, they want to just forget it happened,” said Pauletta Red Willow, a social-services worker on the Pine Ridge reservation. “You can’t ever forget how someone did our children wrong and affected us for generations to come.”

The info is here.

Stanford investigates links to scientist in baby gene-editing scandal

Guardian staff and agencies
The Guardian
Originally posted February 7, 2019

Stanford University has begun an investigation following claims some of its staff knew long ago of Chinese scientist He Jiankui’s plans to create the world’s first gene-edited babies.

A university official said a review was under way of interactions some faculty members had with He, who was educated at Stanford. Several professors including He’s former research adviser have said they knew or strongly suspected He wanted to try gene editing on embryos intended for pregnancy.

The genetic scientist sparked global outcry after he claimed in a video posted on YouTube in November 2018 that he had used the gene-editing tool Crispr-Cas9 to modify a particular gene in two embryos before they were placed in their mother’s womb. He – who works from a lab in the southern Chinese city of Shenzhen – said the twin girls, known as Lulu and Nana, were born through regular IVF but using an egg that was modified before being inserted into the womb. He focused on HIV infection prevention because the girls’ father is HIV positive.

The info is here.

Tuesday, March 5, 2019

Former Ethics Chief Blasts Groups for Holding Events at Trump Hotel

Charles Clark
Originally posted March 4, 2019

Here is an excerpt:

“How many members of Congress, who have a constitutional duty to conduct meaningful oversight of the executive, giddily participate in events at the Trump International Hotel, a taxpayer owned landmark where Trump is his own landlord and the emoluments flow like the $35 martinis?” Shaub wrote.

The criticism of Kuwait was prompted by a letter tweeted earlier by Rep. Ted Lieu, D-Calif. Kuwait's ambassador to Washington, Salem Abdullah Al-Jaber Al-Sabah, had invited Lieu to the February celebration of Kuwait’s 58th National Day and 28th Liberation Day.

Lieu wrote the ambassador on Feb. 11 saying that while he looked forward to a continuing productive partnership, “Regrettably, the event will take place at the Trump International Hotel, which is owned by the President of the United States. I must therefore decline your invitation, as the Emoluments Clause of the U.S. Constitution (Article 1, Section 9, Paragraph 8) stipulates that no federal officeholders shall receive gifts or payments from foreign state or rulers without the consent of Congress.”

Lieu then warned the embassy that the issue raises “serious ethical and legal questions,” and that continuing to hold events “could amount to a violation of the U.S. Constitution.”

The info is here.

Call for retraction of 400 scientific papers amid fears organs came from Chinese prisoners

Melissa Davey
The Guardian
Originally published February 5, 2019

A world-first study has called for the mass retraction of more than 400 scientific papers on organ transplantation, amid fears the organs were obtained unethically from Chinese prisoners.

The Australian-led study exposes a mass failure of English language medical journals to comply with international ethical standards in place to ensure organ donors provide consent for transplantation.

The study was published on Wednesday in the medical journal BMJ Open. Its author, the professor of clinical ethics Wendy Rogers, said journals, researchers and clinicians who used the research were complicit in “barbaric” methods of organ procurement.

“There’s no real pressure from research leaders on China to be more transparent,” Rogers, from Macquarie University in Sydney, said. “Everyone seems to say, ‘It’s not our job’. The world’s silence on this barbaric issue must stop.”

A report published in 2016 found a large discrepancy between official transplant figures from the Chinese government and the number of transplants reported by hospitals. While the government says 10,000 transplants occur each year, hospital data shows between 60,000 and 100,000 organs are transplanted each year. The report provides evidence that this gap is being made up by executed prisoners of conscience.

The info is here.

Monday, March 4, 2019

Suicide rates at a record high, yet insurers still deny care

Patrick Kennedy and Jim Ramstad
Originally posted February 15, 2019

Here is an excerpt:

A recent report from the Centers for Disease Control and Prevention (CDC) reinforces the seriousness of our nation’s mental health crisis. Life expectancy is declining in a way we haven’t seen since World War I. With more than 70,000 drug overdose deaths in 2017 and suicides increasing by 33 percent since 1999, the message is clear: People are not getting the care they need. And for many, it’s a simple matter of access.

When the Mental Health Parity and Addiction Equity Act, also known as the Federal Parity Law, passed in 2008, those of us who drafted and championed the bill knew that talking about mental health wasn’t enough — we needed to ensure access to care as well. Hence, the Federal Parity Law requires most insurers to cover illnesses of the brain, such as depression or addiction, no more restrictively than illnesses of the body, such as diabetes or cancer. We hoped it would remove the barriers that families like Sylvia’s often face when trying to get help.

It has been 10 years since the law passed and, unfortunately, too many Americans are still being denied coverage for mental health and addiction treatment. The reason? A lack of enforcement.

As things stand, the responsibility to challenge inadequate systems of care and illegal denials falls on patients, who are typically unaware of the law or are in the middle of a personal crisis. This isn’t right. Or sustainable. The responsibility for mental health equity should lie with insurers, not with patients or their providers. Insurers should be held accountable for parity before plans are sold.

The info is here.

Mental hospital accused of holding Texas patients against their will has filed for bankruptcy

Sarah Sarder
Originally posted February 12, 2019

A North Texas mental health institution with hospitals in Garland, Fort Worth and Arlington has filed for bankruptcy months after being indicted in a criminal case over illegally detaining patients.

Sundance Behavioral Healthcare System filed for Chapter 11 bankruptcy Thursday. The system faces 20 charges of violating state mental health codes after being indicted in November and December.

In December, Sundance stopped accepting patients at its hospitals and “voluntarily brought its patient count to zero,” its attorneys said.

The corporation announced that it had surrendered its license on Dec. 21 to the state. Attorneys said the hospital could not financially sustain its services in light of the court proceedings.

The info is here.

Sunday, March 3, 2019

When and why people think beliefs are “debunked” by scientific explanations for their origins

Dillon Plunkett, Lara Buchak, and Tania Lombrozo


How do scientific explanations for beliefs affect people’s confidence in those beliefs? For example, do people think neuroscientific explanations for religious belief support or challenge belief in God? In five experiments, we find that the effects of scientific explanations for belief depend on whether the explanations imply normal or abnormal functioning (e.g., if a neural mechanism is doing what it evolved to do). Experiments 1 and 2 find that people think brain-based explanations for religious, moral, and scientific beliefs corroborate those beliefs when the explanations invoke a normally functioning mechanism, but not an abnormally functioning mechanism. Experiment 3 demonstrates comparable effects for other kinds of scientific explanations (e.g., genetic explanations). Experiment 4 confirms that these effects derive from (im)proper functioning, not statistical (in)frequency. Experiment 5 suggests that these effects interact with people’s prior beliefs to produce motivated judgments: People are more skeptical of scientific explanations for their own beliefs if the explanations appeal to abnormal functioning, but they are less skeptical of scientific explanations of opposing beliefs if the explanations appeal to abnormal functioning. These findings suggest that people treat “normality” as a proxy for epistemic reliability and reveal that folk epistemic commitments shape attitudes towards scientific explanations.

The research is here.

Saturday, March 2, 2019

Serious Ethical Violations in Medicine: A Statistical and Ethical Analysis of 280 Cases in the United States From 2008–2016

James M. DuBois, Emily E. Anderson, John T. Chibnall, Jessica Mozersky & Heidi A. Walsh (2019) The American Journal of Bioethics, 19:1, 16-34.
DOI: 10.1080/15265161.2018.1544305


Serious ethical violations in medicine, such as sexual abuse, criminal prescribing of opioids, and unnecessary surgeries, directly harm patients and undermine trust in the profession of medicine. We review the literature on violations in medicine and present an analysis of 280 cases. Nearly all cases involved repeated instances (97%) of intentional wrongdoing (99%), by males (95%) in nonacademic medical settings (95%), with oversight problems (89%) and a selfish motive such as financial gain or sex (90%). More than half of cases involved a wrongdoer with a suspected personality disorder or substance use disorder (51%). Despite clear patterns, no factors provide readily observable red flags, making prevention difficult. Early identification and intervention in cases requires significant policy shifts that prioritize the safety of patients over physician interests in privacy, fair processes, and proportionate disciplinary actions. We explore a series of 10 questions regarding policy, oversight, discipline, and education options. Satisfactory answers to these questions will require input from diverse stakeholders to help society negotiate effective and ethically balanced solutions.

Friday, March 1, 2019

Ex-Bush ethics chief: GOP lawmaker 'should be arrested' for witness tampering

Aris Folley
Originally posted February 27, 2019

Richard Painter, the former chief ethics lawyer for the George W. Bush administration, called for the speedy arrest of Rep. Matt Gaetz (R-Fla.), accusing him of witness tampering hours after he issued what many perceived to be a threatening tweet directed at Michael Cohen on the eve of Cohen's public congressional testimony.

Gaetz drew sharp backlash on Tuesday after posting a tweet, which has since been deleted, that suggested Cohen had not been faithful to his wife and questioned whether his wife would remain faithful to him while he serves time in prison.


Gaetz later issued an apology for the tweet after a number of legal experts and Democrats suggested the post may constitute witness tampering.

Gaetz sought to clarify that it was not his “intent to threaten” Cohen in his earlier tweet and added that “he should have chosen words that better showed my intent.”

The info is here.

Editor's Note: I guess I should not be shocked that nearly one thousand people had retweeted a threat at the time of this screen capture.  There were more.  Tribalism.......

Once-prominent 'conversion therapist' will now 'pursue life as a gay man'

Julie Compton
NBC News
Originally posted January 23, 2019

David Matheson, a once prominent Mormon “conversion therapist” who claims to have helped some gay men remain in heterosexual marriages, is looking for a boyfriend.

The revelation broke Sunday night after the LGBTQ nonprofit Truth Wins Out obtained a private Facebook post made by “conversion therapy” advocate Rich Wyler, which stated that Matheson “says that living a single, celibate life ‘just isn’t feasible for him,’ so he’s seeking a male partner.”

Matheson then confirmed Wyler’s assertions on Tuesday with a Facebook post of his own. “A year ago I realized I had to make substantial changes in my life. I realized I couldn’t stay in my marriage any longer. And I realized that it was time for me to affirm myself as gay,” he wrote.

Matheson, who was married to a woman for 34 years and is now divorced, also confirmed in an interview with NBC News that he is now dating men.

The info is here.