Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, September 30, 2019

Connecting the dots on the origins of social knowledge

Arber Tasimi
in press, Perspectives on Psychological Science

Abstract

Understanding what infants know about social life is a growing enterprise. Indeed, one of the most exciting developments within psychological science over the past decade is the view that infants may come equipped with knowledge about “good” and “bad,” and about “us” and “them.” At the heart of this view is a seminal set of studies indicating that infants prefer helpers to hinderers and similar to dissimilar others. What a growing number of researchers now believe is that these preferences may be based on innate (i.e., unlearned) social knowledge. Here I consider how decades of research in developmental psychology can lead to a different way to make sense of this popular body of work. As I make connections between old observations and new theorizing––and between classic findings and contemporary research––I consider how the same preferences that are thought to emanate from innate social knowledge may, instead, reflect social knowledge that infants can rapidly build as they pursue relationships with their caregivers.  I offer this perspective with hopes that it will inspire future work that supports or questions the ideas sketched out here and, by doing so, will broaden an understanding of the origins of social knowledge.

The paper is here.

An Admissions Group Is Scrambling to Delete Parts of Its Ethical Code. That Could Mean Big Changes for Higher Ed.

Grace Elletson
The Chronicle of Higher Education
Originally published August 30, 2019

Here is an excerpt:

A handful of provisions are at issue. One prohibits colleges from offering incentives, like special housing or better financial-aid packages, only to students who use an early-decision application.

Another says colleges can’t recruit or offer enrollment to students who are already enrolled or have submitted deposits to other colleges. Under the NACAC ethics code, May 1 is when commitments by those students are made final, and colleges must respect that deadline.

Another states that colleges cannot solicit transfer applications from a previous applicant or prospect unless that student inquired about transferring.

According to a document sent to NACAC members, the Justice Department believes “that these provisions restrain competition among colleges” and that, if they are removed, thus allowing for more competition, the result “may lower” college costs if colleges can solicit students who have already committed.

If the provisions are removed, the changes will be significant, and turmoil in admissions offices should be expected, said Jon Boeckenstedt, vice provost for enrollment management at Oregon State University.

Removing those parts of the ethical code would allow institutions to recruit students from competitor colleges even after they’ve committed, and to see their own students get poached, he said.

The changes could cause colleges to enter into a precarious dance — keep students committed and simultaneously recruit others, all year long.

Given the uncertainty that the changes would cause for enrollment predictions, especially for smaller, tuition-dependent colleges, higher education’s landscape will be upended, Boeckenstedt said.

The info is here.

Sunday, September 29, 2019

The brain, the criminal and the courts

[Figure: Mentions of neuroscience in judicial opinions in US cases, 2005 to 2015, for capital homicides, noncapital homicides, and other felonies. Across the three categories combined, mentions grew from 101 in 2005 to more than 400 in 2015, with growth in each category.]

Eryn Brown
knowablemagazine.org
Originally posted August 30, 2019

Here is an excerpt:

It remains to be seen if all this research will yield actionable results. In 2018, Hoffman, who has been a leader in neurolaw research, wrote a paper discussing potential breakthroughs and dividing them into three categories: near term, long term and “never happening.” He predicted that neuroscientists are likely to improve existing tools for chronic pain detection in the near future, and in the next 10 to 50 years he believes they’ll reliably be able to detect memories and lies, and to determine brain maturity.

But brain science will never gain a full understanding of addiction, he suggested, or lead courts to abandon notions of responsibility or free will (a prospect that gives many philosophers and legal scholars pause).

Many realize that no matter how good neuroscientists get at teasing out the links between brain biology and human behavior, applying neuroscientific evidence to the law will always be tricky. One concern is that brain studies ordered after the fact may not shed light on a defendant’s motivations and behavior at the time a crime was committed — which is what matters in court. Another concern is that studies of how an average brain works do not always provide reliable information on how a specific individual’s brain works.

“The most important question is whether the evidence is legally relevant. That is, does it help answer a precise legal question?” says Stephen J. Morse, a scholar of law and psychiatry at the University of Pennsylvania. He is among those who believe that neuroscience will never revolutionize the law, because “actions speak louder than images,” and because, in a legal setting, “if there is a disjunct between what the neuroscience shows and what the behavior shows, you’ve got to believe the behavior.” He worries about the prospect of “neurohype” and attorneys who overstate the scientific evidence.

The info is here.

Saturday, September 28, 2019

Morality as a Basic Psychological Need

Prentice, M., Jayawickreme, E., Hawkins, A.,
Hartley, A., Furr, R. M., & Fleeson, W. (2019). 
Social Psychological and Personality Science, 10(4), 449–460. https://doi.org/10.1177/1948550618772011

Abstract

We investigate the long-standing yet understudied assumption that feeling moral is a basic psychological need, perhaps like the needs to feel autonomous, competent, and related (ACR). We report an empirical “entrance exam” on whether morality should be considered a need. Specifically, we applied to morality a pioneering method from which Sheldon and colleagues provided evidence that ACR are basic psychological needs. In two studies and four samples, participants recalled events in which they felt un/satisfied, meaningful, pleasurable, at their best, and at their worst. They rated how much candidate psychological needs were satisfied during them. Morality was frequently as or more satisfied than ACR during peak events. Further, it was positively related to indices of positive functioning. These findings suggest feelings of being moral may help people identify times when life is going well. Further, they suggest that morality may be a fundamental psychological need and warrants further investigation.

Conclusion

That people have a need to feel moral is a classic psychological notion, and such a need seems integral to explaining the development and maintenance of human moral cognition and behavior.  Despite this, such a need has remained somewhat controversial for mainstream psychological science. We demonstrate that morality meets many of the criteria set out by Baumeister and Leary (1995). More broadly, we see that morality provides important information about whether people’s lives are going well. This work provides a basis for a more prominent position of the moral need in future research.

Friday, September 27, 2019

Empathy choice in physicians and non-physicians

Daryl Cameron and Michael Inzlicht
PsyArXiv
Originally created on September 11, 2019

Abstract

Empathy in medical care has been one of the focal points in the debate over the bright and dark sides of empathy. Whereas physician empathy is sometimes considered necessary for better physician-patient interactions, and is often desired by patients, it also has been described as a potential risk for exhaustion among physicians who must cope with their professional demands of confronting acute and chronic suffering. The present study compared physicians against demographically matched non-physicians on a novel behavioral assessment of empathy, in which they choose between empathizing or remaining detached from suffering targets over a series of trials. Results revealed no statistical differences between physicians and non-physicians in their empathy avoidance, though physicians were descriptively more likely to choose empathy. Additionally, both groups were likely to perceive empathy as cognitively challenging, and the perceived cognitive costs of empathy were associated with empathy avoidance. Across groups, there were also no statistically significant differences in self-reported trait empathy measures and empathy-related motivations and beliefs. Overall, these results suggest that physicians and non-physicians were more similar than different in terms of their empathic choices and in their assessments of the costs and benefits of empathy for others.

Conclusion:

In summary, do physicians choose empathy, and should they do so?  We find that physicians do not show a clear preference to approach or avoid empathy.  Nevertheless, they do perceive empathy to be cognitively taxing, entailing effort, aversiveness, and feelings of inefficacy, and these perceptions were associated with reduced empathy choice.  Physicians who derived more satisfaction and less burnout from helping were more likely to choose empathy, as were those who believed that empathy is good, and useful, for medical practice.  More generally, in the current work, physicians did not show statistically meaningful differences from demographically matched controls in trait empathy, empathy regulation behavior, motivations to approach or avoid empathy, or beliefs about empathy’s use for medicine.  Although it has often been suggested that physicians exhibit different levels of empathy due to the demands of medical care, the current results suggest that physicians are much like everyone else, sensitive to the relevant costs and benefits of empathizing.

The research is here.

Nudging Humans

Brett M. Frischmann
Villanova University - School of Law
Originally published August 1, 2019

Abstract

Behavioral data can and should inform the design of private and public choice architectures. Choice architects should steer people toward outcomes that make them better off (according to their own interests, not the choice architects’) but leave it to the people being nudged to choose for themselves. Libertarian paternalism can and should provide ethical constraints on choice architects. These are the foundational principles of nudging, the ascendant social engineering agenda pioneered by Nobel Prize winning economist Richard Thaler and Harvard law professor Cass Sunstein.

The foundation bears tremendous weight. Nudging permeates private and public institutions worldwide. It creeps into the design of an incredible number of human-computer interfaces and affects billions of choices daily. Yet the foundation has deep cracks.

This critique of nudging exposes those hidden fissures. It aims at the underlying theory and agenda, rather than one nudge or another, because that is where micro meets macro, where dynamic longitudinal impacts on individuals and society need to be considered. Nudging theorists and practitioners need to better account for the longitudinal effects of nudging on the humans being nudged, including malleable beliefs and preferences as well as various capabilities essential to human flourishing. The article develops two novel and powerful criticisms of nudging, one focused on nudge creep and another based on normative myopia. It explores these fundamental flaws in the nudge agenda theoretically and through various examples and case studies, including electronic contracting, activity tracking in schools, and geolocation tracking controls on an iPhone.

The paper is here.

Thursday, September 26, 2019

Business and the Ethical Implications of Technology

Martin, K., Shilton, K. & Smith, J.
J Bus Ethics (2019).
https://doi.org/10.1007/s10551-019-04213-9

Abstract

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. In this introduction, we, first, identify themes the symposium articles share and discuss how the set of articles illuminate diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

The Introduction is here.

There are several other articles related to this introduction.

Patients don't think payers, providers can protect their data, survey finds

Paige Minemyer
Fierce Healthcare
Originally published on August 26, 2019

Patients are skeptical of healthcare industry players’ ability to protect their data—and believe health insurers to be the worst at doing so, a new survey shows.

Harvard T.H. Chan School of Public Health and Politico surveyed 1,009 adults in mid-July and found that just 17% have a “great deal” of faith that their health plan will protect their data.

By contrast, 24% said they had a “great deal” of trust in their hospital to protect their data, and 34% said the same about their physician’s office. In addition, 22% of respondents said they had “not very much” trust in their insurer to protect their data, and 17% said they had no trust at all.

The firms that fared the worst on the survey, however, were online search engines and social media sites. Only 7% said they have a “great deal” of trust in search engines such as Google to protect their data, and only 3% said the same about social media platforms.

The info is here.

Wednesday, September 25, 2019

Suicide rates climbing, especially in rural America

Misti Crane
Ohio State News
Originally published September 6, 2019

Suicide is becoming more common in America, an increase most pronounced in rural areas, new research has found.

The study, which appears online today (Sept. 6, 2019) in the journal JAMA Network Open, also highlights a cluster of factors, including lack of insurance and the prevalence of gun shops, that are associated with high suicide rates.

Researchers at The Ohio State University evaluated national suicide data from 1999 to 2016, and provided a county-by-county national picture of the suicide toll among adults. Suicide rates jumped 41 percent, from a median of 15 per 100,000 county residents in the first part of the study to 21.2 per 100,000 in the last three years of the analysis. Suicide rates were highest in less-populous counties and in areas where people have lower incomes and fewer resources. From 2014 through 2016, suicide rates were 17.6 per 100,000 in large metropolitan counties compared with 22 per 100,000 in rural counties.

In urban areas, counties with more gun shops tended to have higher suicide rates. Counties with the highest suicide rates were mostly in Western states, including Colorado, New Mexico, Utah and Wyoming; in Appalachian states including Kentucky, Virginia and West Virginia; and in the Ozarks, including Arkansas and Missouri.

The info is here.

Not lost in translation: Successfully replicating Prospect Theory in 19 countries

Kai Ruggeri and others
OSF Preprints
Originally posted August 21, 2019

Abstract

Kahneman and Tversky’s 1979 article on Prospect Theory is one of the most influential papers across all of the behavioural sciences. The study tested a series of binary financial (risky) choices, ultimately concluding that judgments formed under uncertainty deviate significantly from those presumed by expected utility theory, which was the prevailing theoretical construct at the time. In the forty years since publication, this study has had a remarkable impact on science, policy, and other real-world applications. At the same time, a number of critiques have been raised about its conclusions and subsequent constructs that were founded on it, such as loss aversion. In an era where such presumed canonical theories have increasingly drawn scrutiny for inability to replicate, we attempted a multinational study of N = 4,099 participants from 19 countries and 13 languages. The same methods and procedures were used as in the original paper, adjusting only currencies to make them relative to current values, and requiring all participants to respond to all items. Overall, we found that results replicated for 94% of the 17 choice items tested. At most, results from the 1979 study were attenuated in our findings, which is most likely due to a more robust sample. Twelve of the 13 theoretical contrasts presented by Kahneman and Tversky also replicated, with a further 89% replication rate of the total contrasts possible when separating by location, up to 100% replication in some countries. We conclude that the principles of Prospect Theory replicate beyond any reasonable thresholds, and provide a number of important insights about replications, attenuation, and implications for the study of human decision-making at population-level.

The research is here.

Tuesday, September 24, 2019

Pentagon seeks 'ethicist' to oversee military artificial intelligence

[Photo: A prototype robot goes through its paces at the Defense Advanced Research Projects Agency (Darpa) Robotics Challenge in Pomona, California, in 2015.]

David Smith
The Guardian
Originally posted September 7, 2019

Wanted: military “ethicist”. Skills: data crunching, machine learning, killer robots. Must have: cool head, moral compass and the will to say no to generals, scientists and even presidents.

The Pentagon is looking for the right person to help it navigate the morally murky waters of artificial intelligence (AI), billed as the battlefield of the 21st century.

“One of the positions we are going to fill will be someone who’s not just looking at technical standards, but who’s an ethicist,” Lt Gen Jack Shanahan, director of the Joint Artificial Intelligence Center (JAIC) at the US defense department, told reporters last week.

“I think that’s a very important point that we would not have thought about this a year ago, I’ll be honest with you. In Maven [a pilot AI machine learning project], these questions really did not rise to the surface every day, because it was really still humans looking at object detection, classification and tracking. There were no weapons involved in that.”

Shanahan added: “So we are going to bring in someone who will have a deep background in ethics and then, with the lawyers within the department, we’ll be looking at how do we actually bake this into the future of the Department of Defense.”

The JAIC is a year old and has 60 employees. Its budget last year was $93m; this year’s request was $268m. Its focus comes amid fears that China has gained an early advantage in the global race to explore AI’s military potential, including for command and control and autonomous weapons.

Cruel, Immoral Behavior Is Not Mental Illness

James L. Knoll & Ronald W. Pies
Psychiatric Times
Originally posted August 19, 2019

Here is an excerpt:

Another way of posing the question is to ask—Do immoral, callous, cruel, and supremely selfish behaviors constitute a mental illness? These socially deviant traits appear in those with and without mental illness, and are widespread in the general population. Are there some perpetrators suffering from a genuine psychotic disorder who remain mentally organized enough to carry out these attacks? Of course, but they are a minority. To further complicate matters, psychotic individuals can also commit violent acts that were motivated by base emotions (resentment, selfishness, etc.), while their psychotic symptoms may be peripheral or merely coincidental.

It bears repeating that reliable, clinically-based data or complete psychological autopsies on perpetrators of mass public shootings are very difficult to obtain. That said, some of the best available research on mass public shooters indicates that they often display “rigidness, hostility, or extreme self-centeredness.” A recent FBI study found that only 25% of mass shooters had ever had a mental illness diagnosis, and only 3 of these individuals had a diagnosis of a psychotic disorder. The FBI’s cautionary statement in this report is incisive: “. . . formally diagnosed mental illness is not a very specific predictor of violence of any type, let alone targeted violence…. declarations that all active shooters must simply be mentally ill are misleading and unhelpful.”

Psychiatric and mental health treatment has its limits, and is not traditionally designed to detect and uncover budding violent extremists. It is designed to work together with individuals who are invested in their own mental health and seek to increase their own degrees of freedom in life in a pro-social manner. This is why calls for more mental health laws or alterations in civil commitment laws are likely to be low-yield at best, with respect to preventing mass killing—and stagnating to mental health progress at worst.

The info is here.

Monday, September 23, 2019

Ohio medical board knew late doctor was sexually assaulting his male patients, but did not remove his license, report says

Richard Strauss
Laura Ly
CNN.com
Originally posted August 30, 2019

Dr. Richard Strauss is believed to have sexually abused at least 177 students at Ohio State University when he worked there between 1978 and 1998. A new investigation has found that the State Medical Board of Ohio knew about the abuse by the late doctor but did nothing.

A new investigation by a working group established by Ohio Gov. Mike DeWine found that the state medical board investigated allegations of sexual misconduct against Strauss in 1996.

The board found credible evidence of sexual misconduct by Strauss and revealed that Strauss had been "performing inappropriate genital exams on male students for years," but no one with knowledge of the case worked to remove his medical license or notify law enforcement, DeWine announced at a press conference Friday.

The investigation revealed that an attorney with the medical board did intend to proceed with a case against Strauss, but for some reason never followed through. That attorney, as well as others involved with the 1996 investigation, are now deceased and cannot be questioned about their conduct, DeWine said.

"We'll likely never know exactly why the case was ultimately ignored by the medical board," DeWine said Friday.

The allegations against Strauss — who died by suicide in 2005 — emerged last year after former Ohio State athletes came forward to claim the doctor had sexually abused them under the guise of a medical examination.

The info is here.

Three things digital ethics can learn from medical ethics

Carissa Véliz
Nature Electronics 2:316-318 (2019)

Here is an excerpt:

Similarly, technological decisions are not only about facts (for example, about what is more efficient), but also about the kind of life we want and the kind of society we strive to build. The beginning of the digital age has been plagued by impositions, with technology companies often including a disclaimer in their terms and conditions that “they can unilaterally change their terms of service agreement without any notice of changes to the users”. Changes towards more respect for autonomy, however, can already be seen. With the implementation of the GDPR in Europe, for instance, tech companies are being urged to accept that people may prefer services that are less efficient or possess less functionality if that means they get to keep their privacy.

One of the ways in which technology has failed to respect autonomy is through the use of persuasive technologies. Digital technologies that are designed to chronically distract us jeopardize not only our attention but also our will, both individually and collectively. Technologies that constantly hijack our attention threaten the resources we need to exercise our autonomy.  If one were to ask people about their goals in life, most people would likely mention things such as “spending more time with family” — not many people would suggest “spending more time on Facebook”.  Yet most people do not accomplish their goals — we get distracted.

The info is here.

Sunday, September 22, 2019

The Ethics Of Hiding Your Data From the Machines

Molly Wood
wired.com
Originally posted August 22, 2019

Here is an excerpt:

There’s also a real and reasonable fear that companies or individuals will take ethical liberties in the name of pushing hard toward a good solution, like curing a disease or saving lives. This is not an abstract problem: The co-founder of Google’s artificial intelligence lab, DeepMind, was placed on leave earlier this week after some controversial decisions—one of which involved the illegal use of over 1.5 million hospital patient records in 2017.

So sticking with the medical kick I’m on here, I propose that companies work a little harder to imagine the worst-case scenario surrounding the data they’re collecting. Study the side effects like you would a drug for restless leg syndrome or acne or hepatitis, and offer us consumers a nice, long, terrifying list of potential outcomes so we actually know what we’re getting into.

And for we consumers, well, a blanket refusal to offer up our data to the AI gods isn’t necessarily the good choice either. I don’t want to be the person who refuses to contribute my genetic data via 23andMe to a massive research study that could, and I actually believe this is possible, lead to cures and treatments for diseases like Parkinson’s and Alzheimer’s and who knows what else.

I also think I deserve a realistic assessment of the potential for harm to find its way back to me, because I didn’t think through or wasn’t told all the potential implications of that choice—like how, let’s be honest, we all felt a little stung when we realized the 23andMe research would be through a partnership with drugmaker (and reliable drug price-hiker) GlaxoSmithKline. Drug companies, like targeted ads, are easy villains—even though this partnership actually could produce a Parkinson’s drug. But do we know what GSK’s privacy policy looks like? That deal was a level of sharing we didn’t necessarily expect.

The info is here.

Saturday, September 21, 2019

The Sacklers were drug dealers who put money over morality.

[Photo caption: ‘At nearly every turn, Purdue put profit first and created more misery.’]

Chris McGreal
The Guardian
Originally published September 17, 2019

If only we could feel Purdue Pharma’s pain.

The directors and owners of the company that did so much to create America’s opioid epidemic are professing distress and bewilderment at the rejection of what they claim are its good faith efforts to help the victims.

Even as Purdue announced plans late Sunday night to file for bankruptcy, its top officials were making unctuous claims that their concern was to combat an epidemic that has claimed more than 400,000 lives. Anyone who stood in the way was depriving suffering Americans of the help they need, they claimed.

Members of the Sackler family who own Purdue have offered to turn over the company to a trust which would funnel future earnings to treatment and other measures to deal with the tragedy. They would also sell Mundipharma, a British-based sister company, and hand over the payment. The Sacklers even said they would give up a part of the huge profits of OxyContin, which made the family multibillionaires.

Some of the state attorneys general and cities suing Purdue have accepted the deal as the best prospect for getting anything out of the company and said the bankruptcy filing was part of the arrangement.

Other attorneys general rejected the move, claiming it was an attempt by Purdue’s owners and executives to hang on to the bulk of the profits of drug dealing and buy their way out of individual accountability. Some of those states are also suing the Sacklers directly.

The info is here.

Friday, September 20, 2019

The crossroads between ethics and technology

Tehilla Shwartz Altshuler
Techcrunch.com
Originally posted August 6, 2019

Here is an excerpt:

The first relates to ethics. If anything is clear today in the world of technology, it is the need to include ethical concerns when developing, distributing, implementing and using technology. This is all the more important because in many domains there is no regulation or legislation to provide a clear definition of what may and may not be done. There is nothing intrinsic to technology that requires that it pursue only good ends. The mission of our generation is to ensure that technology works for our benefit and that it can help realize social ideals. The goal of these new technologies should not be to replicate power structures or other evils of the past. 

Startup nation should focus on fighting crime and improving autonomous vehicles and healthcare advancements. It shouldn’t be running extremist groups on Facebook, setting up “bot farms” and fakes, selling attackware and spyware, infringing on privacy and producing deepfake videos.

The second issue is the lack of transparency. The combination of individuals and companies that have worked for, and sometimes still work with, the security establishment frequently takes place behind a thick screen of concealment. These entities often evade answering challenging questions that result from the Israeli Freedom of Information law and even have recourse to the military censor — a unique Israeli institution — to avoid such inquiries.


Why Moral Emotions Go Viral Online

Ana P. Gantman, William J. Brady, & Jay Van Bavel
Scientific American
Originally posted August 20, 2019

Social media is changing the character of our political conversations. As many have pointed out, our attention is a scarce resource that politicians and journalists are constantly fighting to attract, and the online world has become a primary trigger of our moral outrage. These two ideas, it turns out, are fundamentally related. According to our forthcoming paper, words that appeal to one’s sense of right and wrong are particularly effective at capturing attention, which may help explain this new political reality.

It occurred to us that the way people scroll through their social media feeds is very similar to a classic method psychologists use to measure people’s ability to pay attention. When we mindlessly browse social media, we are rapidly presenting a stream of verbal stimuli to ourselves. Psychologists have been studying this issue in the lab for decades, displaying to subjects a rapid succession of words, one after another, in the blink of an eye. In the lab, people are asked to find a target word among a collection of other words. Once they find it, there’s a short window of time in which that word captures their attention. If there’s a second target word in that window, most people don’t even see it—almost as if they had blinked with their eyes open.

There is an exception: if the second target word is emotionally significant to the viewer, that person will see it. Some words are so important to us that they are able to capture our attention even when we are already paying attention to something else.

The info is here.

Thursday, September 19, 2019

Can Physicians Work in US Immigration Detention Facilities While Upholding Their Hippocratic Oath?

Spiegel P, Kass N, Rubenstein L.
JAMA. Published online August 30, 2019.
doi:10.1001/jama.2019.12567

The modern successor to the Hippocratic oath, called the Declaration of Geneva, was updated and approved by the World Medical Association in 2017. The pledge states that “The health and well-being of my patient will be my first consideration” and “I will not use my medical knowledge to violate human rights and civil liberties, even under threat.” Can a physician work in US immigration detention facilities while upholding this pledge?

There is a humanitarian emergency at the US-Mexico border where migrants, including families, adults, and unaccompanied children, are detained and processed by the Department of Homeland Security’s (DHS) Customs and Border Patrol and are held in overcrowded and unsanitary conditions with insufficient medical care.2 Children (persons under 18 years) without their parents or guardians are often detained in these facilities beyond the 72 hours allowed under federal law. Adults and children with a parent or legal guardian are then transferred from Customs and Border Patrol facilities to DHS’ Immigration and Customs Enforcement facilities, which are also overcrowded and where existing standards for conditions of confinement are often not met. Unaccompanied minors are transferred from Customs and Border Patrol detention facilities to Health and Human Services (HHS) facilities run by the Office of Refugee Resettlement (ORR). The majority of these unaccompanied children are then released to the care of community sponsors, while others stay, sometimes for months.

Children should not be detained for immigration reasons at all, according to numerous professional associations, including the American Academy of Pediatrics.3 Detention of children has been associated with increased physical and psychological illness, including posttraumatic stress disorder, as well as developmental delay and subsequent problems in school.

Given the psychological and physical harm to children who are detained, the United Nations Committee on the Rights of the Child stated that the detention of a child “cannot be justified solely on the basis of the child being unaccompanied or separated, or on their migratory or residence status, or lack thereof,” and should in any event only be used “…as a measure of last resort and for the shortest appropriate period of time.”6 The United States is the only country not to have ratified the Convention on the Rights of the Child, but the international standard is so widely recognized that it should still apply. Children held in immigration detention should be released into settings where they are safe, protected, and can thrive.

The info is here.

Do Ethics Really Matter To Today's Consumers?

Anna-Mieke Anderson
Forbes.com
Originally posted August 20, 2019

Unlike any other time in history, consumers are truly demanding more from the companies with which they do business. Today’s shoppers are looking for ethical, eco-friendly brands that put people and the planet ahead of profits.  Led by the estimated 83 million millennials in the world, this change shows the need for companies to lead with compassion and authenticity. The spending power of millennials can’t be overlooked. They are projected to spend $1.4 trillion annually by 2020.

Undoubtedly, technology is a major contributing factor to this shift in purchasing. Consumers have endless information about a company’s practices, mission and values at their fingertips. They are also attuned to what’s happening in the world around them and want to help address the pressing issues they are facing while not contributing further to the problems they inherited. Consider this: 81% of millennials want a company to make public commitments to charitable causes and global citizenship, something many corporations are not used to doing.

According to the 2018 Conscious Consumer Spending Index, in 2018, 59% of people bought goods or services from a company they considered socially responsible, and 32% of Americans plan to spend even more this year with companies that align with their social values. What’s equally important to note is that in the same timeframe, 32% of Americans refused to support a company that they felt was not socially responsible.

The info is here.

Wednesday, September 18, 2019

California Requires Suicide Prevention Phone Number On Student IDs

Mark Kreider
Kaiser Health News
Originally posted August 30, 2019

Here is an excerpt:

A California law that has greeted students returning to school statewide over the past few weeks bears a striking resemblance to that Palo Alto policy from four years ago. Beginning with the 2019-20 school year, all IDs for California students in grades seven through 12, and in college, must bear the telephone number of the National Suicide Prevention Lifeline. That number is 800-273-TALK (8255).

“I am extremely proud that this strategy has gone statewide,” said Herrmann, who is now superintendent of the Roseville Joint Union High School District near Sacramento.

The new student ID law marks a statewide response to what educators, administrators and students themselves know is a growing need.

The numbers support that idea — and they are as jarring as they are clarifying.

Suicide was the second-leading cause of death in the United States among people ages 10 to 24 in 2017, according to the U.S. Centers for Disease Control and Prevention.  The suicide rate among teenagers has risen dramatically over the past two decades, according to data from the CDC.

The info is here.

Reasons or Rationalisations: The Role of Principles in the Moral Dumbfounding Paradigm

Cillian McHugh, Marek McGann, Eric Igou, & Elaine L. Kinsella 
PsyArXiv
Last edited August 15, 2019

Abstract

Moral dumbfounding occurs when people maintain a moral judgment even though they cannot provide reasons for it. Recently, questions have been raised about whether dumbfounding is a real phenomenon. Two reasons have been proposed as guiding the judgments of dumbfounded participants: harm-based reasons (believing an action may cause harm) or norm-based reasons (breaking a moral norm is inherently wrong). Participants who endorsed either reason were excluded from analysis, and instances of moral dumbfounding seemingly reduced to non-significance. We argue that endorsing a reason is not sufficient evidence that a judgment is grounded in that reason. Stronger evidence should additionally account for (a) articulating a given reason, and (b) consistently applying the reason in different situations. Building on this, we develop revised exclusion criteria across 2 studies. Study 1 included an open-ended response option immediately after the presentation of a moral scenario. Responses were coded for mention of harm-based or norm-based reasons. Participants were excluded from analysis if they both articulated and endorsed a given reason. Using these revised criteria for exclusion, we found evidence for dumbfounding, as measured by participants selecting an admission of not having reasons. Study 2 included a further three questions assessing the consistency with which people apply harm-based reasons. As predicted, few participants consistently applied, articulated, and endorsed harm-based reasons, and evidence for dumbfounding was found.

The research is here.

Tuesday, September 17, 2019

Aiming For Moral Mediocrity

Eric Schwitzgebel
Res Philosophica, Vol 96 (3), July 2019.
DOI: 10.11612/resphil.1806

Abstract

Most people aim to be about as morally good as their peers—not especially better, not especially worse. We do not aim to be good, or non-bad, or to act permissibly rather than impermissibly, by fixed moral standards. Rather, we notice the typical behavior of our peers, then calibrate toward so-so. This is a somewhat bad way to be, but it’s not a terribly bad way to be. We are somewhat morally criticizable for having low moral ambitions. Typical arguments defending the moral acceptability of low moral ambitions—the So-What-If-I’m-Not-a-Saint Excuse, the Fairness Objection, the Happy Coincidence Defense, and the claim that you’re already in The-Most-You-Can-Do Sweet Spot—do not survive critical scrutiny.

Conclusion

Most of us do not aim to be morally good by absolute standards. Instead we aim to be about as morally good as our peers. Our peers are somewhat morally criticizable—not morally horrible, but morally mediocre. If we aim to approximately match their mediocrity, we are somewhat morally criticizable for having such low personal moral ambitions. It’s tempting to try to rationalize one’s mediocrity away by admitting merely that one is not a saint, or by appealing to the Fairness Objection or the Happy Coincidence Defense, or by flattering oneself that one is already in The-Most-You-Can-Do Sweet Spot—but these self-serving excuses don’t survive scrutiny.

Consider where you truly aim. Maybe moral goodness isn’t so important to you, as long as you’re not among the worst. If so, own your mediocrity.  Accept the moral criticism you deserve for your low moral ambitions, or change them.

When do we punish people who don’t?

Martin, J., Jordan, J., Rand, D., & Cushman, F.
(2019). Cognition, 193(August)
doi.org/10.1016/j.cognition.2019.104040

Abstract

People often punish norm violations. In what cases is such punishment viewed as normative—a behavior that we “should” or even “must” engage in? We approach this question by asking when people who fail to punish a norm violator are, themselves, punished. (For instance, a boss who fails to punish transgressive employees might, herself, be fired). We conducted experiments exploring the contexts in which higher-order punishment occurs, using both incentivized economic games and hypothetical vignettes describing everyday situations. We presented participants with cases in which an individual fails to punish a transgressor, either as a victim (second-party) or as an observer (third-party). Across studies, we consistently observed higher-order punishment of non-punishing observers. Higher-order punishment of non-punishing victims, however, was consistently weaker, and sometimes non-existent. These results demonstrate the selective application of higher-order punishment, provide a new perspective on the psychological mechanisms that support it, and provide some clues regarding its function.

The research can be found here.

Monday, September 16, 2019

Sex misconduct claims up 62% against California doctors

Vandana Ravikumar
USAToday.com
Originally posted August 12, 2019

The number of complaints against California physicians for sexual misconduct has risen by 62% since the fall of 2017, according to a Los Angeles Times investigation.

The investigation, published Monday, found that the rise in complaints coincides with the beginning of the #MeToo movement, which encouraged victims of sexual misconduct or assault to speak out about their experiences. Though complaints of sexual misconduct against physicians are small in number, they are among the fastest growing types of allegations.

Recent high-profile incidents of sexual misconduct involving medical professionals were also a catalyst, the Times reported. Those cases include the abuses of Larry Nassar, a former USA Gymnastics doctor who was sentenced in 2018 to 40 to 175 years in prison for molesting hundreds of young athletes.

That same year, hundreds of women accused former University of Southern California gynecologist George Tyndall of inappropriate behavior. Tyndall, who worked at the university for nearly three decades, was recently charged with sexually assaulting 16 women.

The info is here.

Increasing altruistic and cooperative behaviour with simple moral nudges

Valerio Capraro, Glorianna Jagfeld,
Rana Klein, Mathijs Mul & Iris van de Pol
Nature.com
Published Online August 15, 2019

The conflict between pro-self and pro-social behaviour is at the core of many key problems of our time, such as the reduction of air pollution and the redistribution of scarce resources. For the well-being of our societies, it is thus crucial to find mechanisms to promote pro-social choices over egoistic ones. Particularly important, because they are cheap and easy to implement, are those mechanisms that can change people’s behaviour without forbidding any options or significantly changing their economic incentives, the so-called “nudges”. Previous research has found that moral nudges (e.g., making norms salient) can promote pro-social behaviour. However, little is known about whether their effect persists over time and spills across contexts. This question is key in light of research showing that pro-social actions are often followed by selfish actions, thus suggesting that some moral manipulations may backfire. Here we present a class of simple moral nudges that have a great positive impact on pro-sociality. In Studies 1–4 (total N = 1,400), we use economic games to demonstrate that asking subjects to self-report “what they think is the morally right thing to do” not only increases pro-sociality in the choice immediately after, but also in subsequent choices, and even when the social context changes. In Study 5, we explore whether moral nudges promote charity donations to humanitarian organisations in a large (N = 1,800) crowdfunding campaign. We find that, in this context, moral nudges increase donations by about 44 percent.

The research is here.

Sunday, September 15, 2019

To Study the Brain, a Doctor Puts Himself Under the Knife

Adam Piore
MIT Technology Review
Originally published November 9, 2015

Here are two excerpts:

Kennedy became convinced that the way to take his research to the next level was to find a volunteer who could still speak. For almost a year he searched for a volunteer with ALS who still retained some vocal abilities, hoping to take the patient offshore for surgery. “I couldn’t get one. So after much thinking and pondering I decided to do it on myself,” he says. “I tried to talk myself out of it for years.”

The surgery took place in June 2014 at a 13-bed Belize City hospital a thousand miles south of his Georgia-based neurology practice and also far from the reach of the FDA. Prior to boarding his flight, Kennedy did all he could to prepare. At his small company, Neural Signals, he fabricated the electrodes the neurosurgeon would implant into his motor cortex—even chose the spot where he wanted them buried. He put aside enough money to support himself for a few months if the surgery went wrong. He had made sure his living will was in order and that his older son knew where he was.

(cut)

To some researchers, Kennedy’s decisions could be seen as unwise, even unethical. Yet there are cases where self-experiments have paid off. In 1984, an Australian doctor named Barry Marshall drank a beaker filled with bacteria in order to prove they caused stomach ulcers. He later won the Nobel Prize. “There’s been a long tradition of medical scientists experimenting on themselves, sometimes with good results and sometimes without such good results,” says Jonathan Wolpaw, a brain-computer interface researcher at the Wadsworth Center in New York. “It’s in that tradition. That’s probably all I should say without more information.”

The info is here.


Saturday, September 14, 2019

Do People Want to Be More Moral?

Jessie Sun and Geoffrey Goodwin
PsyArXiv Preprints
Originally posted August 26, 2019

Abstract

Most people want to change some aspects of their personality, but does this phenomenon extend to moral character, and to close others? Targets (N = 800) and well-acquainted informants (N = 958) rated targets’ personality traits and reported how much they wanted the target to change each trait. Targets and informants reported a lower desire to change more morally-relevant traits (e.g., honesty, compassion), compared to less morally-relevant traits (e.g., anxiety, sociability). Moreover, although targets and informants generally wanted targets to improve more on traits that targets had less desirable levels of, targets’ moral change goals were less calibrated to their current levels. Finally, informants wanted targets to change in similar ways, but to a lesser extent, than targets themselves did. These findings shed light on self–other similarities and asymmetries in personality change goals, and suggest that the general desire for self-improvement may be less prevalent in the moral domain.

From the Discussion:

Why don’t people particularly want to be more moral? One possibility is that people see less room for improvement on moral traits, especially given the relatively high ratings on these traits.  Our data cannot speak directly to this possibility, because people might not be claiming that they have the lowest or highest possible levels of each trait when they “strongly disagree” or “strongly agree” with each trait description (Blanton & Jaccard, 2006). Testing this idea would therefore require a more direct measure of where people think they stand, relative to these extremes.

A related possibility is that people are less motivated to improve moral traits because they already see themselves as being quite high on such traits, and therefore morally “good enough”—even if they think they could be morally better (see Schwitzgebel, 2019). Consistent with this idea, supplemental analyses showed that people are less inclined to change the traits that they rate themselves higher on, compared to traits that they rate themselves lower on. However, even controlling for current levels, people are still less inclined to change more morally-relevant traits (see Supplemental Material for these within-person analyses), suggesting that additional psychological factors might reduce people’s desire to change morally-relevant traits. One additional possibility is that people are more motivated to change in ways that will improve their own well-being (Hudson & Fraley, 2016). Whereas becoming less anxious has obvious personal benefits, people might believe that becoming more moral would result in few personal benefits (or even costs).

The research is here.

Friday, September 13, 2019

Intention matters to make you (im)moral: Positive-negative asymmetry in moral character evaluations

Paula Yumi Hirozawa, M. Karasawa & A. Matsuo
(2019) The Journal of Social Psychology
DOI: 10.1080/00224545.2019.1653254

Abstract

Is intention, even if unfulfilled, enough to make a person appear to be good or bad? In this study, we investigated the influence of unfulfilled intentions of an agent on subsequent moral character evaluations. We found a positive-negative asymmetry in the effect of intentions. Factual information concerning failure to fulfill a positive intention mitigated the morality judgment of the actor, yet this mitigation was not as evident for the negative vignettes. Participants rated an actor who failed to fulfill their negative intention as highly immoral, as long as there was an external explanation to its unfulfillment. Furthermore, both emotional and cognitive (i.e., informativeness) processes mediated the effect of negative intention on moral character. For the positive intention, there was a significant mediation by emotions, yet not by informativeness. Results evidence the relevance of mental states in moral character evaluations and offer affective and cognitive explanations to the asymmetry.

Conclusion

In this study, we investigated whether intentions by themselves are enough to make an agent appear to be good or bad. The answer is yes, but with a caveat. We found negative intentions are more indicative of an immoral character than positive intentions are diagnostic of moral character. Simply intending to offer cookies should not, after all, make a neighbor particularly virtuous, unless the intention is acted out. The positive-negative asymmetry demonstrated in the present study may capture a fundamental aspect of people’s moral judgments, particularly for disposition-based evaluations.

The dynamics of social support among suicide attempters: A smartphone-based daily diary study

Coppersmith, D.D.L.; Kleiman, E.M.; Glenn, C.R.; Millner, A.J.; Nock, M.K.
Behaviour Research and Therapy (2018)

Abstract

Decades of research suggest that social support is an important factor in predicting suicide risk and resilience. However, no studies have examined dynamic fluctuations in day-by-day levels of perceived social support. We examined such fluctuations over 28 days among a sample of 53 adults who attempted suicide in the past year (992 total observations). Variability in social support was analyzed with between-person intraclass correlations and root mean square of successive differences. Multi-level models were conducted to determine the association between social support and suicidal ideation. Results revealed that social support varies considerably from day to day with 45% of social support ratings differing by at least one standard deviation from the prior assessment. Social support is inversely associated with same-day and next-day suicidal ideation, but not with next-day suicidal ideation after adjusting for same-day suicidal ideation (i.e., not with daily changes in suicidal ideation). These results suggest that social support is a time-varying protective factor for suicidal ideation.
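The root mean square of successive differences (RMSSD) used above to quantify day-to-day variability is straightforward to compute; a minimal sketch follows, with hypothetical daily ratings standing in for the study's actual data:

```python
import math

def rmssd(ratings):
    """Root mean square of successive differences for a series of
    daily ratings (e.g., perceived social support). Higher values
    indicate greater day-to-day fluctuation."""
    # Differences between each day and the one before it
    diffs = [b - a for a, b in zip(ratings, ratings[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical week of daily support ratings on a 0-10 scale
week = [7, 4, 8, 8, 3, 6, 7]
print(round(rmssd(week), 2))  # → 3.16
```

Unlike a simple standard deviation, RMSSD is sensitive to the ordering of observations, which is why it suits the paper's question of how support fluctuates from one day to the next.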

The research is here.

Thursday, September 12, 2019

Americans Have Shifted Dramatically on What Values Matter Most

Chad Day
The Wall Street Journal
Originally published August 25, 2019

The values that Americans say define the national character are changing, as younger generations rate patriotism, religion and having children as less important to them than did young people two decades ago, a new Wall Street Journal/NBC News survey finds.

The poll is the latest sign of difficulties the 2020 presidential candidates will likely face in crafting a unifying message for a country divided over personal principles and views of an increasingly diverse society.

When the Journal/NBC News survey asked Americans 21 years ago to say which values were most important to them, strong majorities picked the principles of hard work, patriotism, commitment to religion and the goal of having children.

Today, hard work remains atop the list, but the shares of Americans listing the other three values have fallen substantially, driven by changing priorities of people under age 50.

Some 61% in the new survey cited patriotism as very important to them, down 9 percentage points from 1998, while 50% cited religion, down 12 points. Some 43% placed a high value on having children, down 16 points from 1998.

Views varied sharply by age. Among people 55 and older, for example, nearly 80% said patriotism was very important, compared with 42% of those ages 18-38—the millennial generation and older members of Gen-Z.

Two-thirds of the older group cited religion as very important, compared with fewer than one-third of the younger group.

“There’s an emerging America where issues like children, religion and patriotism are far less important. And in America, it’s the emerging generation that calls the shots about where the country is headed,” said Republican pollster Bill McInturff, who conducted the survey with Democratic pollster Jeff Horwitt.

The info is here.

Morals Ex Machina: Should We Listen To Machines For Moral Guidance?

Michael Klenk
3QuarksDaily.com
Originally posted August 12, 2019

Here are two excerpts:

The prospects of artificial moral advisors depend on two core questions: Should we take ethical advice from anyone anyway? And, if so, are machines any good at morality (or, at least, better than us, so that it makes sense that we listen to them)? I will only briefly be concerned with the first question and then turn to the second question at length. We will see that we have to overcome several technical and practical barriers before we can reasonably take artificial moral advice.

(cut)

The limitation of ethically aligned artificial advisors raises an urgent practical problem, too. From a practical perspective, decisions about values and their operationalisation are taken by the machine’s designers. Taking their advice means buying into preconfigured ethical settings. These settings might not agree with you, and they might be opaque so that you have no way of finding out how specific values have been operationalised. This would require accepting the preconfigured values on blind trust. The problem already exists in machines that give non-moral advice, such as mapping services. For example, when you ask your phone for the way to the closest train station, the device will have to rely on various assumptions about what path you can permissibly take and it may also consider commercial interests of the service provider. However, we should want the correct moral answer, not what the designers of such technologies take that to be.

We might overcome these practical limitations by letting users input their own values and decide about their operationalisation themselves. For example, the device might ask users a series of questions to determine their ethical views and also require them to operationalise each ethical preference precisely. A vegetarian might, for instance, have to decide whether she understands ‘vegetarianism’ to encompass ‘meat-free meals’ or ‘meat-free restaurants.’ Doing so would give us personalised moral advisors that could help us live more consistently by our own ethical rules.

However, it would then be unclear how specifying our individual values, and their operationalisation improves our moral decision making instead of merely helping individuals to satisfy their preferences more consistently.

The info is here.

Wednesday, September 11, 2019

Assessment of Patient Nondisclosures to Clinicians of Experiencing Imminent Threats

Levy AG, Scherer AM, Zikmund-Fisher BJ, Larkin K, Barnes GD, Fagerlin A.
JAMA Netw Open. Published online August 14, 2019. 2(8):e199277.
doi:10.1001/jamanetworkopen.2019.9277

Question 

How common is it for patients to withhold information from clinicians about imminent threats that they face (depression, suicidality, abuse, or sexual assault), and what are common reasons for nondisclosure?

Findings 

This survey study, incorporating 2 national, nonprobability, online surveys of a total of 4,510 US adults, found that at least one-quarter of participants who experienced each imminent threat reported withholding this information from their clinician. The most commonly endorsed reasons for nondisclosure included potential embarrassment, being judged, or difficult follow-up behavior.

Meaning

These findings suggest that concerns about potential negative repercussions may lead many patients who experience imminent threats to avoid disclosing this information to their clinician.

Conclusion

This study reveals an important concern about clinician-patient communication: if patients commonly withhold information from clinicians about significant threats that they face, then clinicians are unable to identify and attempt to mitigate these threats. Thus, these results highlight the continued need to develop effective interventions that improve the trust and communication between patients and their clinicians, particularly for sensitive, potentially life-threatening topics.

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Forbes.com
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or your environmental or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.

The info is here.

Tuesday, September 10, 2019

Physicians Talking With Their Partners About Patients

Morris NP, & Eshel N.
JAMA. Published online August 16, 2019.
doi:10.1001/jama.2019.12293

Maintaining patient privacy is a fundamental responsibility for physicians. However, physicians often share their lives with partners or spouses. A 2018 survey of 15,069 physicians found that 85% were currently married or living with a partner, and when physicians come home from work, their partners might reasonably ask about their day. Physicians are supposed to keep patient information private in almost all circumstances, but are these realistic expectations for physicians and their partners? Might this expectation preclude potential benefits of these conversations?

In many cases, physician disclosure of clinical information to partners may violate patients’ trust. Patient privacy is so integral to the physician role that the Hippocratic oath notes, “And whatsoever I shall see or hear in the course of my profession...if it be what should not be published abroad, I will never divulge, holding such things to be holy secrets.” Whether over routine health care matters, such as blood pressure measurements; or potentially sensitive topics, such as end-of-life decisions, concerns of abuse, or substance use, patients expect their interactions with physicians to be kept in the strictest confidence. No hospital or clinic provides patients with the disclaimer, “Your private health information may be shared over the dinner table.” If a patient learned that his physician shared information about his medical encounters without permission, the patient may be far less likely to trust the physician or participate in ongoing care.

Physicians who share details with their partners about patients may not anticipate the effects of doing so. For instance, a physician’s partner could recognize the patient being discussed, whether from social connections or media coverage. After sharing patient information, physicians lose control of this information, and their partners, who may have less training about medical privacy, could unintentionally reveal sensitive patient information during future conversations.

The info is here.

Can Ethics Be Taught?

Peter Singer
Project Syndicate
Originally published August 7, 2019

Can taking a philosophy class – more specifically, a class in practical ethics – lead students to act more ethically?

Teachers of practical ethics have an obvious interest in the answer to that question. The answer should also matter to students thinking of taking a course in practical ethics. But the question also has broader philosophical significance, because the answer could shed light on the ancient and fundamental question of the role that reason plays in forming our ethical judgments and determining what we do.

Plato, in the Phaedrus, uses the metaphor of a chariot pulled by two horses; one represents rational and moral impulses, the other irrational passions or desires. The role of the charioteer is to make the horses work together as a team. Plato thinks that the soul should be a composite of our passions and our reason, but he also makes it clear that harmony is to be found under the supremacy of reason.

In the eighteenth century, David Hume argued that this picture of a struggle between reason and the passions is misleading. Reason on its own, he thought, cannot influence the will. Reason is, he famously wrote, “the slave of the passions.”

The info is here.