Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, December 10, 2018

What makes a ‘good’ clinical ethicist?

Trevor Bibler
Baylor College of Medicine Blog
Originally posted October 12, 2018

Here is an excerpt:

Some hold that the complexity of clinical ethics consultations couldn’t be reduced to multiple-choice questions based on a few sources, arguing that creating multiple-choice questions that reflect the challenges of doing clinical ethics is nearly impossible. Most of the time, the HEC-C Program is careful to emphasize that they are testing knowledge of issues in clinical ethics, not the ethicist’s ability to apply this knowledge to the practice of clinical ethics.

This is a nuanced distinction that may be lost on those outside the field. For example, an administrator might view the HEC-C Program as separating a good ethicist from an inadequate ethicist simply because they have 400 hours of experience and can pass a multiple-choice exam.

Others disagree with the source material (called “core references”) that serves as the basis for exam questions. I believe the core references, if repetitious, are important works in the field. My concern is that these works do not pay sufficient attention to some of the most pressing and challenging issues in clinical ethics today: income inequality, care for non-citizens, drug abuse, race, religion, sex and gender, to name a few areas.

Also, it’s feasible that inadequate ethicists will become certified. I can imagine an ethicist might meet the requirements, but fall short of being a good ethicist because in practice they are poor communicators, lack empathy, are authoritarian when analyzing ethics issues, or have an off-putting presence.

On the other hand, I know some ethicists I would consider experts in the field who are not going to undergo the certification process because they disagree with it. Both of these scenarios show that HEC certification should not be the single requirement that separates a good ethicist from an inadequate ethicist.

The info is here.

Somers Point therapist charged with hiring hitman to 'permanently disfigure' victim

Lauren Carroll
The Press of Atlantic City
Originally posted November 6, 2018

A Somers Point therapist told an undercover FBI agent posing as a hitman she wanted her Massachusetts colleague’s “face bashed-in” and arm broken, according to a criminal complaint filed with the U.S. Attorney’s Office.

Diane Sylvia, 58, has been charged with solicitation to commit a crime of violence and appeared in Camden federal court Monday.

According to the criminal complaint filed Friday, a person contacted the FBI to report a murder-for-hire scheme on Sept. 24.

The informant is a former member of an organized criminal gang and was in therapy with Sylvia, a licensed clinical social worker. Sylvia allegedly asked the informant to help kill a North Attleboro, Massachusetts, man, the complaint said.

Neither Sylvia nor her lawyer, Michael Paulhus of Toms River, could be reached for comment.

According to the court documents, Sylvia targeted the man after he threatened to report her to a licensing board. She wanted the man assaulted to “make (her) feel better,” according to court documents.

The info is here.

Sunday, December 9, 2018

The Vulnerable World Hypothesis

Nick Bostrom
Working Paper (2018)

Abstract

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

The working paper is here.

Vulnerable World Hypothesis: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

Saturday, December 8, 2018

Psychological health profiles of Canadian psychotherapists: A wake up call on psychotherapists’ mental health

Laverdière, O., Kealy, D., Ogrodniczuk, J. S., & Morin, A. J. S.
(2018) Canadian Psychology/Psychologie canadienne, 59(4), 315-322.
http://dx.doi.org/10.1037/cap0000159

Abstract

The mental health of psychotherapists represents a key determinant of their ability to deliver optimal psychological services. However, this important topic is seldom the focus of empirical investigations. The objectives of the current study were twofold. First, the study aimed to assess subjective ratings of mental health in a broad sample of Canadian psychotherapists. Second, this study aimed to identify profiles of psychotherapists according to their scores on a series of mental health indicators. A total of 240 psychotherapists participated in the survey. Results indicated that 20% of psychotherapists were emotionally exhausted and 10% were in a state of significant psychological distress. Latent profile analyses revealed 4 profiles of psychotherapists that differed on their level of mental health: highly symptomatic (12%), at risk (35%), well adapted (40%), and high functioning (12%). Characteristics of the profiles are discussed, as well as potential implications of our findings for practice, trainee selection, and future research on psychotherapists’ mental health.

Here is part of the Discussion:

Considering that 12% of the psychotherapists were highly symptomatic and that an additional 35% could be considered at risk for significant mental health problems, the present findings raise troubling questions. Were these psychotherapists adequately prepared to help clients? From the perspective of attachment theory, the psychotherapist functions as an attachment figure for the client (Mallinckrodt, 2010); clients require their psychotherapists to provide a secure attachment base that allows for the exploration of negative thoughts and feelings, as well as for the alleviation of distress (Slade, 2016). A psychotherapist who is preoccupied with his or her own personal distress may find it very difficult to play this role efficiently and may at least implicitly bring some maladaptive features to the clinical encounter, thus depriving the client of the possibility of experiencing a secure attachment in the context of the therapeutic relationship. Moreover, regardless of the potential attachment implications, clients prefer experiencing a secure relationship with an emotionally responsive psychotherapist (Swift & Callahan, 2010). More precisely, Swift and Callahan (2010) found that clients were, to some extent, willing to forego empirically supported interventions in favour of a satisfactory relationship with the therapist, empathy from the therapist, and greater level of therapist experience. The present results cast a reasonable doubt on the ability of extenuated psychotherapists, and more so psychologically ill therapists, to present themselves in a positive light to the client in order to build strong therapeutic relationships with them.

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Neuroexistentialism: A New Search for Meaning

Owen Flanagan and Gregg D. Caruso
The Philosopher's Magazine
Originally published November 6, 2018

Existentialisms are responses to recognisable diminishments in the self-image of persons caused by social or political rearrangements or ruptures, and they typically involve two steps: (a) admission of the anxiety and an analysis of its causes, and (b) some sort of attempt to regain a positive, less anguished, more hopeful image of persons. With regard to the first step, existentialisms typically involve a philosophical expression of the anxiety that there are no deep, satisfying answers that make sense of the human predicament and explain what makes human life meaningful, and thus that there are no secure foundations for meaning, morals, and purpose. There are three kinds of existentialisms that respond to three different kinds of grounding projects – grounding in God’s nature, in a shared vision of the collective good, or in science. The first-wave existentialism of Kierkegaard, Dostoevsky, and Nietzsche expressed anxiety about the idea that meaning and morals are made secure because of God’s omniscience and good will. The second-wave existentialism of Sartre, Camus, and de Beauvoir was a post-Holocaust response to the idea that some uplifting secular vision of the common good might serve as a foundation. Today, there is a third-wave existentialism, neuroexistentialism, which expresses the anxiety that, even as science yields the truth about human nature, it also disenchants.

Unlike the previous two waves of existentialism, neuroexistentialism is not caused by a problem with ecclesiastical authority, nor by the shock of coming face to face with the moral horror of nation state actors and their citizens. Rather, neuroexistentialism is caused by the rise of the scientific authority of the human sciences and a resultant clash between the scientific and humanistic image of persons. Neuroexistentialism is a twenty-first-century anxiety over the way contemporary neuroscience helps secure in a particularly vivid way the message of Darwin from 150 years ago: that humans are animals – not half animal, not some percentage animal, not just above the animals, but 100 percent animal. Every day and in every way, neuroscience removes the last vestiges of an immaterial soul or self. It has no need for such posits. It also suggests that the mind is the brain and all mental processes just are (or are realised in) neural processes, that introspection is a poor instrument for revealing how the mind works, that there is no ghost in the machine or Cartesian theatre where consciousness comes together, that death is the end since when the brain ceases to function so too does consciousness, and that our sense of self may in part be an illusion.

The info is here.

Thursday, December 6, 2018

Partisanship, Political Knowledge, and the Dunning‐Kruger Effect

Ian G. Anson
Political Psychology
First published: 02 April 2018
https://doi.org/10.1111/pops.12490

Abstract

A widely cited finding in social psychology holds that individuals with low levels of competence will judge themselves to be higher achieving than they really are. In the present study, I examine how the so‐called “Dunning‐Kruger effect” conditions citizens' perceptions of political knowledgeability. While low performers on a political knowledge task are expected to engage in overconfident self‐placement and self‐assessment when reflecting on their performance, I also expect the increased salience of partisan identities to exacerbate this phenomenon due to the effects of directional motivated reasoning. Survey experimental results confirm the Dunning‐Kruger effect in the realm of political knowledge. They also show that individuals with moderately low political expertise rate themselves as increasingly politically knowledgeable when partisan identities are made salient. This below‐average group is also likely to rely on partisan source cues to evaluate the political knowledge of peers. In a concluding section, I comment on the meaning of these findings for contemporary debates about rational ignorance, motivated reasoning, and political polarization.

Survey Finds Widespread 'Moral Distress' Among Veterinarians

Carey Goldberg
NPR.org
Originally posted October 17, 2018

In some ways, it can be harder to be a doctor of animals than a doctor of humans.

"We are in the really unenviable, and really difficult, position of caring for patients maybe for their entire lives, developing our own relationships with those animals — and then being asked to kill them," says Dr. Lisa Moses, a veterinarian at the Massachusetts Society for the Prevention of Cruelty to Animals-Angell Animal Medical Center and a bioethicist at Harvard Medical School.

She's the lead author of a study published Monday in the Journal of Veterinary Internal Medicine about "moral distress" among veterinarians. The survey of more than 800 vets found that most feel ethical qualms — at least sometimes — about what pet owners ask them to do. And that takes a toll on their mental health.

Dr. Virginia Sinnott-Stutzman is all too familiar with the results. As a senior staff veterinarian in emergency and critical care at Angell, she sees a lot of very sick animals — and quite a few decisions by owners that trouble her.

Sometimes, owners elect to have their pets put to sleep because they can't or won't pay for treatment, she says. Or the opposite, "where we know in our heart of hearts that there is no hope to save the animal, or that the animal is suffering and the owners have a set of beliefs that make them want to keep going."

The info is here.

Wednesday, December 5, 2018

Toward a psychology of Homo sapiens: Making psychological science more representative of the human population

Mostafa Salari Rad, Alison Jane Martingano, and Jeremy Ginges
PNAS, 115(45), 11401-11405. Published ahead of print November 6, 2018. https://doi.org/10.1073/pnas.1721165115

Abstract

Two primary goals of psychological science should be to understand what aspects of human psychology are universal and the way that context and culture produce variability. This requires that we take into account the importance of culture and context in the way that we write our papers and in the types of populations that we sample. However, most research published in our leading journals has relied on sampling WEIRD (Western, educated, industrialized, rich, and democratic) populations. One might expect that our scholarly work and editorial choices would by now reflect the knowledge that Western populations may not be representative of humans generally with respect to any given psychological phenomenon. However, as we show here, almost all research published by one of our leading journals, Psychological Science, relies on Western samples and uses these data in an unreflective way to make inferences about humans in general. To take us forward, we offer a set of concrete proposals for authors, journal editors, and reviewers that may lead to a psychological science that is more representative of the human condition.

Georgia Tech has had a ‘dramatic increase’ in ethics complaints, president says

Eric Stirgus
The Atlanta Journal-Constitution
Originally published November 6, 2018

Here is an excerpt:

The Atlanta Journal-Constitution reported in September Georgia Tech is often slow in completing ethics investigations. Georgia Tech took an average of 102 days last year to investigate a complaint, the second-longest time of any college or university in the University System of Georgia, according to a report presented in April to the state’s Board of Regents. Savannah State University had the longest average time, 135 days.

Tuesday’s meeting is the kick-off to more than a week’s worth of discussions at Tech to improve its ethics culture. University System of Georgia Chancellor Steve Wrigley ordered Georgia Tech to update him on what officials there are doing to improve after reports found problems such as a top official who was a paid board member of a German-based company that had contracts with Tech. Peterson’s next update is due Monday.

A few employees told Peterson they’re concerned that many administrators are now afraid to make decisions and asked the president what’s being done to address that. Peterson acknowledged “there’s some anxiety on campus” and asked employees to “embrace each other” as they work through what he described as an embarrassing chapter in the school’s history.

The info is here.

Tuesday, December 4, 2018

Letting tech firms frame the AI ethics debate is a mistake

Robert Hart
www.fastcompany.com
Originally posted November 2, 2018

Here is an excerpt:

Even many ethics-focused panel discussions–or manel discussions, as some call them–are pale, male, and stale. That is to say, they are made up predominantly of old, white, straight, and wealthy men. Yet these discussions are meant to be guiding lights for AI technologies that affect everyone.

A historical illustration is useful here. Consider polio, a disease that was declared global public health enemy number one after the successful eradication of smallpox decades ago. The “global” part is important. Although the movement to eradicate polio was launched by the World Health Assembly, the decision-making body of the United Nations’ World Health Organization, the eradication campaign was spearheaded primarily by groups in the U.S. and similarly wealthy countries. Promulgated with intense international pressure, the campaign distorted local health priorities in many parts of the developing world.

It’s not that the developing countries wanted their citizens to contract polio. Of course, they didn’t. It’s just that they would have rather spent the significant sums of money on more pressing local problems. In essence, one wealthy country imposed their own moral judgement on the rest of the world, with little forethought about the potential unintended consequences. The voices of a few in the West grew to dominate and overpower those elsewhere–a kind of ethical colonialism, if you will.

The info is here.

Document ‘informed refusal’ just as you would informed consent

James Scibilia
AAP News
Originally posted October 20, 2018

Here is an excerpt:

The requirements of informed refusal are the same as informed consent. Providers must explain:

  • the proposed treatment or testing;
  • the risks and benefits of refusal;
  • anticipated outcome with and without treatment; and
  • alternative therapies, if available.

Documentation of this discussion, including all four components, in the medical record is critical to mounting a successful defense from a claim that you failed to warn about the consequences of refusing care.

Since state laws vary, it is good practice to check with your malpractice carrier about preferred risk management documentation. Generally, the facts of these discussions should be included and signed by the caretaker. This conversation and documentation should not be delegated to other members of the health care team. At least one state has affirmed through a Supreme Court decision that informed consent must be obtained by the provider performing the procedure and not another team member; it is likely the concept of informed refusal would bear the same requirements.

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol 13, No. 5, pp. 451-457.

Abstract

In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants did not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participants’ decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest profile examples: Facebook may be able to infer protected attributes such as sexual orientation, race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on the eligibility for loans and infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.

The info is here.

Saturday, December 1, 2018

Building trust by tearing others down: When accusing others of unethical behavior engenders trust

Jessica A. Kennedy, Maurice E. Schweitzer.
Organizational Behavior and Human Decision Processes
Volume 149, November 2018, Pages 111-128

Abstract

We demonstrate that accusations harm trust in targets, but boost trust in the accuser when the accusation signals that the accuser has high integrity. Compared to individuals who did not accuse targets of engaging in unethical behavior, accusers engendered greater trust when observers perceived the accusation to be motivated by a desire to defend moral norms, rather than by a desire to advance ulterior motives. We also found that the accuser’s moral hypocrisy, the accusation's revealed veracity, and the target’s intentions when committing the unethical act moderate the trust benefits conferred to accusers. Taken together, we find that accusations have important interpersonal consequences.

Highlights

•    Accusing others of unethical behavior can engender greater trust in an accuser.
•    Accusations can elevate trust by boosting perceptions of accusers’ integrity.
•    Accusations fail to build trust when they are perceived to reflect ulterior motives.
•    Morally hypocritical accusers and false accusations fail to build trust.
•    Accusations harm trust in the target.

The research is here.

Friday, November 30, 2018

To regulate AI we need new laws, not just a code of ethics

Paul Chadwick
The Guardian
Originally posted October 28, 2018

Here is an excerpt:

To Nemitz, “the absence of such framing for the internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call”.

Nemitz identifies four bases of digital power which create and then reinforce its unhealthy concentration in too few hands: lots of money, which means influence; control of “infrastructures of public discourse”; collection of personal data and profiling of people; and domination of investment in AI, most of it a “black box” not open to public scrutiny.

The key question is which of the challenges of AI “can be safely and with good conscience left to ethics” and which need law. Nemitz sees much that needs law.

In an argument both biting and sophisticated, Nemitz sketches a regulatory framework for AI that will seem to some like the GDPR on steroids.

Among several large claims, Nemitz argues that “not regulating these all pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships.”

The info is here.

The Knobe Effect From the Perspective of Normative Orders

Andrzej Waleszczyński, Michał Obidziński, & Julia Rejewska
Studia Humana, Volume 7:4 (2018), pp. 9-15

Abstract:

The characteristic asymmetry in the attribution of intentionality in causing side effects, known as the Knobe effect, is considered to be a stable model of human cognition. This article looks at whether the way of thinking and analysing one scenario may affect the other and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies performed are based on a distinction between moral and non-moral normativity possibly affecting the judgments passed in both scenarios. Therefore, an essential role in judgments about the intentionality of causing a side effect could be played by normative competences responsible for distinguishing between normative orders.

The research is here.

Thursday, November 29, 2018

Ethical Free Riding: When Honest People Find Dishonest Partners

Jörg Gross, Margarita Leib, Theo Offerman, & Shaul Shalvi
Psychological Science
https://doi.org/10.1177/0956797618796480

Abstract

Corruption is often the product of coordinated rule violations. Here, we investigated how such corrupt collaboration emerges and spreads when people can choose their partners versus when they cannot. Participants were assigned a partner and could increase their payoff by coordinated lying. After several interactions, they were either free to choose whether to stay with or switch their partner or forced to stay with or switch their partner. Results reveal that both dishonest and honest people exploit the freedom to choose a partner. Dishonest people seek a partner who will also lie—a “partner in crime.” Honest people, by contrast, engage in ethical free riding: They refrain from lying but also from leaving dishonest partners, taking advantage of their partners’ lies. We conclude that to curb collaborative corruption, relying on people’s honesty is insufficient. Encouraging honest individuals not to engage in ethical free riding is essential.

Conclusion
The freedom to select partners is important for the establishment of trust and cooperation. As we show here, however, it is also associated with potential moral hazards. For individuals who seek to keep the risk of collusion low, policies providing the freedom to choose one’s partners should be implemented with caution. Relying on people’s honesty may not always be sufficient because honest people may be willing to tolerate others’ rule violations if they stand to profit from them. Our results clarify yet again that people who are not willing to turn a blind eye and stand up to corruption should receive all praise.

Does AI Ethics Need to be More Inclusive?

Patrick Lin
Forbes.com
Originally posted October 29, 2018

Here is an excerpt:

Ethics is more than a survey of opinions

First, as the study’s authors allude to in their Nature paper and elsewhere, public attitudes don’t dictate what’s ethical or not.  People believe all kinds of crazy things—such as that slavery should be permitted—but that doesn’t mean those ethical beliefs are true or have any weight.  So, capturing responses of more people doesn’t necessarily help figure out what’s ethical or not.  Sometimes, more is just more, not better or even helpful.

This is the difference between descriptive ethics and normative ethics.  The former is more like sociology that simply seeks to describe what people believe, while the latter is more like philosophy that seeks reasons for why a belief may be justified (or not) and how things ought to be.

Dr. Edmond Awad, lead author of the Nature paper, cautioned, “What we are trying to show here is descriptive ethics: peoples’ preferences in ethical decisions.  But when it comes to normative ethics, which is how things should be done, that should be left to experts.”

Nonetheless, public attitudes are a necessary ingredient in practical policymaking, which should aim at the ethical but doesn’t always hit that mark.  If expert judgments in ethics diverge too much from public attitudes—asking more from a population than what they’re willing to agree to—that’s a problem for implementing the policy, and a resolution is needed.

The info is here.

Wednesday, November 28, 2018

Why good businesspeople do bad things

Joseph Holt
The Chicago Tribune
Originally posted October 30, 2018

Here is an excerpt:

Businesspeople are also more likely to engage in bad behavior if they assume that their competitors are doing so and that they will be at a competitive disadvantage if they do not.

A 2006 study showed that MBA students in the U.S. and Canada were more likely to cheat than other graduate students. One of the authors of the study, Donald McCabe, explained in an article that the cheating was a result of MBA students’ “succeed-at-all-costs mentality” and the belief that they were acting the way they believed they needed to act to succeed in the corporate world.

Casey Donnelly, Gatto’s attorney, claimed in her opening statement at the trial that “every major apparel company” engaged in the same payment practice, and that her client was simply attempting to “level the playing field.”

Federal authorities engaged in a yearslong investigation of shadowy dealings involving shoe companies, sports agents, college coaches and top high school basketball players have reportedly looked into Nike and Under Armour as well as Adidas.

Time will tell whether those companies were involved in similar payment schemes.

The info is here.

Promoting wellness and stress management in residents through emotional intelligence training

Ramzan Shahid, Jerold Stirling, William Adams
Advances in Medical Education and Practice, Volume 9

Background: 

US physicians are experiencing burnout in alarming numbers. However, doctors with high levels of emotional intelligence (EI) may be immune to burnout, as they possess coping strategies which make them more resilient and better at managing stress. Educating physicians in EI may help prevent burnout and optimize their overall wellness. The purpose of our study was to determine if educational intervention increases the overall EI level of residents; specifically, their stress management and wellness scores.

Participant and methods: 

Residents from pediatrics and med-ped residency programs at a university-based training program volunteered to complete an online self-report EI survey (EQ-i 2.0) before and after an educational intervention. The four-hour educational workshop focused on developing four EI skills: self-awareness; self-management; social awareness; and social skills. We compared de-identified median score reports for the residents as a cohort before and after the intervention.

Results: 

Thirty-one residents (20 pediatric and 11 med-ped residents) completed the EI survey at both time intervals and were included in the analysis of results. We saw a significant increase in total EI median scores before and after educational intervention (110 vs 114, P=0.004). The stress management composite median score significantly increased (105 vs 111, P<0.001). The resident’s overall wellness score also improved significantly (104 vs 111, P=0.003).

Conclusions: 

As a group, our pediatric and med-peds residents had a significant increase in total EI and several other components of EI following an educational intervention. Teaching EI skills related to the areas of self-awareness, self-management, social awareness, and social skill may improve stress management skills, promote wellness, and prevent burnout in resident physicians.

The research is here.

Tuesday, November 27, 2018

A fate worse than death

Cathy Rentzenbrink
Prospect Magazine
Originally posted March 18, 2018

Here is an excerpt:

We have lost our way with death. Improvements in medicine have led us to believe that a long and fulfilling life is our birthright. Death is no longer seen as the natural consequence of life but as an inconvenient and unjust betrayal. We are in an age of denial.

Why does this matter? Why not allow ourselves this pleasant and surely harmless delusion? It matters because we are in a peculiar and precise period of history where our technological advances enable us to keep people alive when we probably shouldn’t. Life or death is no longer a black and white situation. There are many and various shades of grey. We behave as though death is the worst outcome, but it isn’t.

Many years after the accident, when I wrote a book about it called The Last Act of Love, I catalogued what happened to me as I witnessed the destruction of my brother. I detailed the drinking and the depression. The hardest thing was tracking our journey from hope to despair. I still find it hard to be precise about exactly when and how I realised that Matty would be better off dead. I know I moved from being convinced that if I tried hard enough I could bring Matty back to life, to thinking I should learn to love him as he was. Eventually I asked myself the right question: would Matty himself want to be alive like this? Of course, the answer was no.

The info is here.

Therapist empathy and client outcome: An updated meta-analysis

Elliott, R., Bohart, A. C., Watson, J. C., & Murphy, D. (2018).
Psychotherapy, 55(4), 399-410.

Abstract

Put simply, empathy refers to understanding what another person is experiencing or trying to express. Therapist empathy has a long history as a hypothesized key change process in psychotherapy. We begin by discussing definitional issues and presenting an integrative definition. We then review measures of therapist empathy, including the conceptual problem of separating empathy from other relationship variables. We follow this with clinical examples illustrating different forms of therapist empathy and empathic response modes. The core of our review is a meta-analysis of research on the relation between therapist empathy and client outcome. Results indicated that empathy is a moderately strong predictor of therapy outcome: mean weighted r = .28 (p < .001; 95% confidence interval [.23, .33]; equivalent of d = .58) for 82 independent samples and 6,138 clients. In general, the empathy–outcome relation held for different theoretical orientations and client presenting problems; however, there was considerable heterogeneity in the effects. Client, observer, and therapist perception measures predicted client outcome better than empathic accuracy measures. We then consider the limitations of the current data. We conclude with diversity considerations and practice recommendations, including endorsing the different forms that empathy may take in therapy.
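Editorial note: the reported d of .58 is consistent with the standard conversion of a correlation to Cohen's d (a quick check using the common equal-weighting formula, not a calculation stated by the authors):

\[ d = \frac{2r}{\sqrt{1 - r^{2}}} = \frac{2(0.28)}{\sqrt{1 - 0.28^{2}}} \approx 0.58 \]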


Clinical Impact Statement—
Question: Does therapist empathy predict success in psychotherapy? 
Findings: In general, clients have moderately better outcomes in psychotherapy when clients, therapists, and observers perceive therapists as understanding them. 
Meaning: Empathy is an important element of any therapeutic relationship, and worth the investment of time and effort required to do it well and consistently. 
Next Steps: Careful research using diverse methods is needed to firmly establish and explain the causal role of therapist empathy in bringing about client outcome; clinicians can contribute by identifying situations in which empathy may be particularly valuable or conversely contraindicated.

Monday, November 26, 2018

First gene-edited babies claimed in China

Marilynn Marchione
Associated Press
Originally posted November 26, 2018

A Chinese researcher claims that he helped make the world’s first genetically edited babies — twin girls born this month whose DNA he said he altered with a powerful new tool capable of rewriting the very blueprint of life.

If true, it would be a profound leap of science and ethics.

A U.S. scientist said he took part in the work in China, but this kind of gene editing is banned in the United States because the DNA changes can pass to future generations and it risks harming other genes.

Many mainstream scientists think it’s too unsafe to try, and some denounced the Chinese report as human experimentation.

The researcher, He Jiankui of Shenzhen, said he altered embryos for seven couples during fertility treatments, with one pregnancy resulting thus far. He said his goal was not to cure or prevent an inherited disease, but to try to bestow a trait that few people naturally have — an ability to resist possible future infection with HIV, the AIDS virus.

The info is here.

An evaluative conservative case for biomedical enhancement

John Danaher
Journal of Medical Ethics
Volume 42, 9 (2018)

Abstract

It is widely believed that a conservative moral outlook is opposed to biomedical forms of human enhancement. In this paper, I argue that this widespread belief is incorrect. Using Cohen's evaluative conservatism as my starting point, I argue that there are strong conservative reasons to prioritise the development of biomedical enhancements. In particular, I suggest that biomedical enhancement may be essential if we are to maintain our current evaluative equilibrium (ie, the set of values that undergird and permeate our current political, economic and personal lives) against the threats to that equilibrium posed by external, non-biomedical forms of enhancement. I defend this view against modest conservatives who insist that biomedical enhancements pose a greater risk to our current evaluative equilibrium, and against those who see no principled distinction between the forms of human enhancement.

Conclusion

In conclusion, despite the widespread belief that conservative moral principles are opposed to human enhancement, there are in fact strong reasons to think that human enhancement has conservative potential. This is because technological development does not take place in a vacuum. One cannot consider the effects of biomedical enhancement technology in isolation from other trends in technological progress. When this is done, it becomes apparent that AI, robotics and information technology are developing at a rapid pace and their widespread deployment could undermine much of our current evaluative equilibrium. Biomedical enhancement may be necessary, not merely desirable, if we are to maintain that equilibrium.

The info is here.

Sunday, November 25, 2018

Academic Ethics: Should Scholars Avoid Citing the Work of Awful People?

Brian Leiter
The Chronicle of Higher Education
Originally posted October 25, 2018

Here is an excerpt:

The issue is particularly fraught in one of my academic fields, philosophy, in which Gottlob Frege, the founder of modern logic and philosophy of language, was a disgusting anti-Semite, and Martin Heidegger, a prominent figure in 20th-century existentialism, was an actual Nazi.

What is a scholar to do?

I propose a simple answer: Insofar as you aim to contribute to scholarship in your discipline, cite work that is relevant regardless of the author’s misdeeds. Otherwise you are not doing scholarship but something else. Let me explain.

Wilhelm von Humboldt crafted the influential ideal of the modern research university in Germany some 200 years ago. In his vision, the university is a place where all, and only, Wissenschaften — "sciences" — find a home. The German Wissenschaften has no connotation of natural science, unlike its English counterpart. A Wissenschaft is any systematic form of inquiry into nature, history, literature, or society marked by rigorous methods that secure the reliability or truth of its findings.

The info is here.

Saturday, November 24, 2018

Establishing an AI code of ethics will be harder than people think

Karen Hao
www.technologyreview.com
Originally posted October 21, 2018

Over the past six years, the New York City police department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.

"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them?

Not only is facial recognition imperfect, studies have shown that the leading software is less accurate for dark-skinned individuals and women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

The info is here.

Friday, November 23, 2018

The Moral Law Within: The Scientific Case For Self-Governance

Carsten Tams
Forbes.com
Originally posted September 26, 2018

Here is an excerpt:

The behavioral ethics literature, and its reception in the ethics and compliance field, is following a similar trend. Behavioral ethics is often defined as the discipline that helps to explain why good people do bad things. It frequently focuses on how various biases, cognitive heuristics, blind spots, ethical fading, bounded ethicality, or rationalizations compromise people’s ethical intentions.

To avoid misunderstandings, I am a fan and avid consumer of behavioral science literature. Understanding unethical biases is fascinating and raising awareness about them is useful. But it is only half the story. There is more to behavioral science than biases and fallacies. A lopsided focus on biases may lead us to view people’s morality as hopelessly flawed. Standing amidst a forest crowded by biases and fallacies, we may forget that people often judge and act morally.

Such an anthropological bias has programmatic consequences. If we frame organizational ethics simply as a problem of people’s ethical biases, we will focus on keeping these negative biases in check. This framing, however, does not provide a rationale for supporting people’s capacity for self-governed ethical behavior. For such a rationale, we would need evidence that such a capacity exists. The human capacity for morality has been a subject of rigorous inquiry across diverse behavioral disciplines. In the following, this article will highlight a selection of major contributions to this inquiry.

The info is here.

Thursday, November 22, 2018

The Importance of Making the Moral Case for Immigration

Ilya Somin
reason.com
Originally posted on October 23, 2018

Here is an excerpt:

The parallels between racial discrimination and hostility to immigration were in fact noted by such nineteenth century opponents of slavery as Abraham Lincoln and Frederick Douglass. These similarities suggest that moral appeals similar to those made by the antislavery and civil rights movements can also play a key role in the debate over immigration.

Moral appeals were in fact central to the two issues on which public opinion has been most supportive of immigrants in recent years: DACA and family separation. Overwhelming majorities support letting undocumented immigrants who were brought to America as children stay in the US, and oppose the forcible separation of children from their parents at the border. In both cases, public opinion seems driven by considerations of justice and morality, not narrow self-interest (although letting DACA recipients stay would indeed benefit the US economy). Admittedly, these are relatively "easy" cases because both involve harming children for the alleged sins of their parents. But they nonetheless show the potency of moral considerations in the immigration debate. And most other immigration restrictions are only superficially different: instead of punishing children for their parents' illegal border-crossing, they victimize adults and children alike because their parents gave birth to them in the wrong place.

The key role of moral principles in struggles for liberty and equality should not be surprising. Contrary to popular belief, voters' political views on most issues are not determined by narrow self-interest. Public attitudes are instead generally driven by a combination of moral principles and perceived benefits to society as a whole. Immigration is not an exception to that tendency.

This is not to say that voters weigh the interests of all people equally. Throughout history, they have often ignored or downgraded those of groups seen as inferior, or otherwise undeserving of consideration. Slavery and segregation persisted in large part because, as Supreme Court Chief Justice Roger Taney notoriously put it, many whites believed that blacks "had no rights which the white man was bound to respect." Similarly, the subordination of women was not seriously questioned for many centuries, because most people believed that it was a natural part of life, and that men were entitled to rule over the opposite sex. In much the same way, today most people assume that natives are entitled to keep out immigrants either to preserve their culture against supposedly inferior ways or because they analogize a nation to a house or club from which the "owners" can exclude newcomers for almost any reason they want.

The info is here.

Wednesday, November 21, 2018

Trump EPA official who was indicted on ethics charges has resigned

Brady Dennis
The Washington Post
Originally posted November 19, 2018

A regional administrator for the Environmental Protection Agency, indicted in Alabama last week on violations of state ethics laws, has resigned.

Trey Glenn, who oversaw eight states in the Southeast as the EPA’s Region 4 leader, faces charges of using his office for personal gain and soliciting or receiving a “thing of value” from a principal or lobbyist, according to the Alabama Ethics Commission. He was booked at the Jefferson County Jail on Thursday in Birmingham and later released on a $30,000 bond, records show.

The charges against Glenn and a former business partner appear to stem from work helping a coal company fight liability in an EPA-mandated cleanup of a polluted site in north Birmingham. Glenn has denied wrongdoing, but he submitted his resignation over the weekend to acting EPA administrator Andrew Wheeler.

The info is here.

Editorial note: Just another example of how the swamp only deepened in the current administration.

Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics

Kalev Leetaru
Forbes.com
Originally posted October 23, 2018

Two weeks ago, a new data ethics initiative, the Responsible Computer Science Challenge, caught my eye. Funded by the Omidyar Network, Mozilla, Schmidt Futures and Craig Newmark Philanthropies, the initiative will award up to $3.5M to “promising approaches to embedding ethics into undergraduate computer science education, empowering graduating engineers to drive a culture shift in the tech industry and build a healthier internet.” I was immediately excited about a well-funded initiative focused on seeding data ethics into computer science curricula, getting students talking about ethics from the earliest stages of their careers. At the same time, I was concerned about whether even such a high-profile effort could possibly reverse the tide of anti-data-ethics that has taken root in academia and what impact it could realistically have in a world in which universities, publishers, funding agencies and employers have largely distanced themselves from once-sacrosanct data ethics principles like informed consent and the right to opt out. Surprisingly, for an initiative focused on evangelizing ethics, the Challenge declined to answer any of the questions I posed it regarding how it saw its efforts as changing this. Is there any hope left for data ethics when the very initiatives designed to help teach ethics don’t want to talk about ethics?

On its surface, the Responsible Computer Science Challenge seems a tailor-built response to a public rapidly awakening to the incredible damage unaccountable platforms have wreaked upon society. The Challenge describes its focus as “supporting the conceptualization, development, and piloting of curricula that integrate ethics with undergraduate computer science training, educating a new wave of engineers who bring holistic thinking to the design of technology products.”

The info is here.

Tuesday, November 20, 2018

Moral leaders perform better, but what’s ‘moral’ is up for debate

Matthew Biddle
State University of New York at Buffalo - Press Release
Originally released October 22, 2018

New research from the University at Buffalo School of Management is clear: Leaders who value morality outperform their unethical peers, regardless of industry, company size or role. However, because we all define a “moral leader” differently, leaders who try to do good may face unexpected difficulties.

Led by Jim Lemoine, PhD, assistant professor of organization and human resources, the research team examined more than 300 books, essays and studies on moral leadership from 1970-2018. They discovered that leaders who prioritized morality had higher performing organizations with less turnover, and that their employees were more creative, proactive, engaged and satisfied.

A pre-press version of the study appeared online this month ahead of publication in the Academy of Management Annals in January 2019.

“Over and over again, our research found that followers perceived ethical leaders as more effective and trusted, and those leaders enjoyed greater personal well-being than managers with questionable morality,” Lemoine says. “The problem is, though, that when we talk about an ‘ethical business leader,’ we’re often not talking about the same person.”

The press release is here.

The research is here.

Abstract
Moral forms of leadership such as ethical, authentic, and servant leadership have seen a surge of interest in the 21st century. The proliferation of morally-based leadership approaches has resulted in theoretical confusion and empirical overlap that mirror substantive concerns within the larger leadership domain. Our integrative review of this literature reveals connections with moral philosophy that provide a useful framework to better differentiate the specific moral content (i.e., deontology, virtue ethics, and consequentialism) that undergirds ethical, authentic, and servant leadership respectively. Taken together, this integrative review clarifies points of integration and differentiation among moral approaches to leadership and delineates avenues for future research that promise to build complementary rather than redundant knowledge regarding how moral approaches to leadership inform the broader leadership domain.

How tech employees are pushing Silicon Valley to put ethics before profit

Alexia Fernández Campbell
vox.com
Originally published October 18, 2018

The chorus of tech workers demanding American tech companies put ethics before profit is growing louder.

In recent days, employees at Google and Microsoft have been pressuring company executives to drop bids for a $10 billion contract to provide cloud computing services to the Department of Defense.

As part of the contract, known as JEDI, engineers would build cloud storage for military data; there are few public details about what else it would entail. But one thing is clear: The project would involve using artificial intelligence to make the US military a lot deadlier.

“This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said at a March industry event about JEDI.

Thousands of Google employees reportedly pressured the company to drop its bid for the project, and many had said they would refuse to work on it. They pointed out that such work may violate the company’s new ethics policy on the use of artificial intelligence. Google has pledged not to use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” a policy company employees had pushed for.

The info is here.

Monday, November 19, 2018

Why Facts Don’t Change Our Minds

James Clear
www.jamesclear.com
Undated

Facts Don't Change Our Minds. Friendship Does.

Convincing someone to change their mind is really the process of convincing them to change their tribe. If they abandon their beliefs, they run the risk of losing social ties. You can’t expect someone to change their mind if you take away their community too. You have to give them somewhere to go. Nobody wants their worldview torn apart if loneliness is the outcome.

The way to change people’s minds is to become friends with them, to integrate them into your tribe, to bring them into your circle. Now, they can change their beliefs without the risk of being abandoned socially.

The British philosopher Alain de Botton suggests that we simply share meals with those who disagree with us:
“Sitting down at a table with a group of strangers has the incomparable and odd benefit of making it a little more difficult to hate them with impunity. Prejudice and ethnic strife feed off abstraction. However, the proximity required by a meal – something about handing dishes around, unfurling napkins at the same moment, even asking a stranger to pass the salt – disrupts our ability to cling to the belief that the outsiders who wear unusual clothes and speak in distinctive accents deserve to be sent home or assaulted. For all the large-scale political solutions which have been proposed to salve ethnic conflict, there are few more effective ways to promote tolerance between suspicious neighbours than to force them to eat supper together.” 
Perhaps it is not difference, but distance that breeds tribalism and hostility. As proximity increases, so does understanding. I am reminded of Abraham Lincoln's quote, “I don't like that man. I must get to know him better.”

Facts don't change our minds. Friendship does.

The Link Between Self-Dehumanization and Immoral Behavior

Association for Psychological Science

Here is an excerpt:

After establishing that unethical behavior can increase self-dehumanization, the researchers then carried out a second set of studies designed to test whether self-dehumanization may also lead to unethical behavior. Across all three studies, participants completed the writing assignment described above before deliberating over a moral choice. Those in the self-dehumanization condition were found both to cheat more when asked to self-report the results of a coin flip in exchange for cash, and to assign partners to the more difficult of two available tasks.

Finally, the researchers tested their full model of self-dehumanization on a sample of 429 students. Participants first predicted a series of seemingly random coin flips. Unbeknownst to them, the results in the neutral condition were rigged so that the participants’ predictions always matched the coin flips. In the possibility-of-cheating condition, however, the results were rigged to always be inconsistent with the coin flips, followed by an erroneous message announcing that they had guessed correctly.

Participants in the cheating condition then had a choice: they could either click a box on the screen to report the “technical issue,” or they could collect their ill-gotten $2 reward for an accurate prediction. Of the 293 participants in this condition, 134 chose to take the money.

After reporting their levels of self-dehumanization and completing a filler task, participants then completed an anagram test in which those in the possibility-of-cheating condition had another chance to misreport their results. As the researchers suspected, people who chose to take money they did not deserve in the first task reported higher self-dehumanization, and were more likely to cheat in the final task as well.

The information is here.

Kouchaki, M., Dobson, K. S., Waytz, A., & Kteily, N. S. (2018). The Link Between Self-Dehumanization and Immoral Behavior. Psychological Science, 29(8), 1234-1246. doi:10.1177/0956797618760784

Sunday, November 18, 2018

Bornstein claims Trump dictated the glowing health letter

Alex Marquardt and Lawrence Crook
CNN.com
Originally posted May 2, 2018

When Dr. Harold Bornstein described then-candidate Donald Trump's health in hyperbolic prose in 2015, the language he used was eerily similar to the style preferred by his patient.

It turns out the patient himself wrote it, according to Bornstein.

"He dictated that whole letter. I didn't write that letter," Bornstein told CNN on Tuesday. "I just made it up as I went along."

The admission is an about-face from his answer more than two years earlier, when the letter was released, and it answers one of the lingering questions about the last presidential election. The letter thrust the eccentric Bornstein, with his shoulder-length hair and round eyeglasses, into public view.

"His physical strength and stamina are extraordinary," he crowed in the letter, which was released by Trump's campaign in December 2015. "If elected, Mr. Trump, I can state unequivocally, will be the healthiest individual ever elected to the presidency."

The missive didn't offer much medical evidence for those claims beyond citing a blood pressure of 110/65, described by Bornstein as "astonishingly excellent." It claimed Trump had lost 15 pounds over the preceding year. And it described his cardiovascular health as "excellent."

The info is here.

Dartmouth Allowed 3 Professors to Sexually Harass and Assault Students, Lawsuit Charges

Nell Gluckman
The Chronicle of Higher Education
Originally published November 15, 2018

Seven current and former students sued Dartmouth College on Thursday, saying it had failed to protect them from three psychology and brain-science professors who sexually harassed and assaulted them. In the lawsuit, filed in a federal court in New Hampshire, they say that when they and others reported horrific treatment, the college did nothing, allowing the professors’ behavior to continue until last spring, when one retired and the other two resigned.

The 72-page complaint, which seeks class-action status, describes an academic department where heavy drinking, misogyny, and sexual harassment were normalized. It says that the three professors — Todd F. Heatherton, William M. Kelley, and Paul J. Whalen — “leered at, groped, sexted,” and “intoxicated” students. One former student alleges she was raped by Kelley, and a current student alleges she was raped by Whalen. Dartmouth ended a Title IX investigation after the professors left, and, as far as the complainants could tell, did not attempt to examine how the abuse occurred or how it could be prevented from happening again, according to the complaint.

In a written statement, a Dartmouth spokesman said that college officials “respectfully but strongly disagree with the characterizations of Dartmouth’s actions in the complaint and will respond through our own court filings.”

The info is here.

Saturday, November 17, 2018

The New Age of Patient Autonomy: Implications for the Patient-Physician Relationship

Madison Kilbride and Steven Joffe
JAMA. Published online October 15, 2018.

Here is an excerpt:

The New Age of Patient Autonomy

The abandonment of strong medical paternalism led scholars to explore alternative models of the patient-physician relationship that emphasize patient choice. Shared decision making gained traction in the 1980s and remains the preferred model for health care interactions. Broadly, shared decision making involves the physician and patient working together to make medical decisions that accord with the patient’s values and preferences. Ideally, for many decisions, the physician and patient engage in an informational volley—the physician provides information about the range of options, and the patient expresses his or her values and preferences. In some cases, the physician may need to help the patient identify or clarify his or her values and goals of care in light of the available treatment options.

Although there is general consensus that patients should participate in and ultimately make their own medical decisions whenever possible, most versions of shared decision making take for granted that the physician has access to knowledge, understanding, and medical resources that the patient lacks. As such, the shift from medical paternalism to patient autonomy did not wholly transform the physician’s role in the therapeutic relationship.

In recent years, however, widespread access to the internet and social media has reduced physicians’ dominion over medical information and, increasingly, over patients’ access to medical products and services. It is no longer the case that patients simply visit their physicians, describe their symptoms, and wait for the differential diagnosis. Today, some patients arrive at the physician’s office having thoroughly researched their symptoms and identified possible diagnoses. Indeed, some patients who have lived with rare diseases may even know more about their conditions than some of the physicians with whom they consult.

The info is here.

Friday, November 16, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally published October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest-profile examples: Facebook may be able to infer protected attributes such as sexual orientation and race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on eligibility for loans and to infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far-reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

The information is here.

Motivated misremembering: Selfish decisions are more generous in hindsight

Ryan Carlson, Michel Marechal, Bastiaan Oud, Ernst Fehr, & Molly Crockett
PsyArXiv
Created on: July 22, 2018 | Last edited: July 22, 2018

Abstract

People often prioritize their own interests, but also like to see themselves as moral. How do individuals resolve this tension? One way to both maximize self-interest and maintain a moral self-image is to misremember the extent of one’s selfishness. Here, we tested this possibility. Across three experiments, participants decided how to split money with anonymous partners, and were later asked to recall their decisions. Participants systematically recalled being more generous in the past than they actually were, even when they were incentivized to recall accurately. Crucially, this effect was driven by individuals who gave less than what they personally believed was fair, independent of how objectively selfish they were. Our findings suggest that when people’s actions fall short of their own personal standards, they may misremember the extent of their selfishness, thereby warding off negative emotions and threats to their moral self-image.

Significance statement

Fairness is widely endorsed in human societies, but less often practiced. Here we demonstrate how memory distortions may contribute to this discrepancy. Across three experiments (N = 1005), we find that people consistently remember being more generous in the past than they actually were. We show that this effect occurs specifically for individuals whose decisions fell below their own fairness standards, irrespective of how high or low those standards were. These findings suggest that when people perceive their own actions as selfish, they can remember having acted more equitably, thus minimizing guilt and preserving their self-image.

The research is here.

Thursday, November 15, 2018

The Impact of Leader Moral Humility on Follower Moral Self-Efficacy and Behavior

Owens, B. P., Yam, K. C., Bednar, J. S., Mao, J., & Hart, D. W.
Journal of Applied Psychology. (2018)

Abstract

This study utilizes social–cognitive theory, humble leadership theory, and the behavioral ethics literature to theoretically develop the concept of leader moral humility and its effects on followers. Specifically, we propose a theoretical model wherein leader moral humility and follower implicit theories about morality interact to predict follower moral efficacy, which in turn increases follower prosocial behavior and decreases follower unethical behavior. We furthermore suggest that these effects are strongest when followers hold an incremental implicit theory of morality (i.e., believing that one’s morality is malleable). We test and find support for our theoretical model using two multiwave studies with Eastern (Study 1) and Western (Study 2) samples. Furthermore, we demonstrate that leader moral humility predicts follower moral efficacy and moral behaviors above and beyond the effects of ethical leadership and leader general humility.

Here is the conclusion:

We introduced the construct of leader moral humility and theorized its effects on followers. Two studies with samples from both Eastern and Western cultures provided empirical support that leader moral humility enhances followers’ moral self-efficacy, which in turn leads to increased prosocial behavior and decreased unethical behavior. We further demonstrated that these effects depend on followers’ implicit theories of the malleability of morality. More important, we found that these effects were above and beyond the influences of general humility, ethical leadership, LMX, and ethical norms of conduct, providing support for the theoretical and practical importance of this new leadership construct. Our model is based on the general proposal that we need followers who believe in and leaders who model ongoing moral development. We hope that the current research inspires further exploration regarding how leaders and followers interact to shape and facilitate a more ethical workplace.

The article is here.

Expectations Bias Moral Evaluations

Derek Powell & Zachary Horne
PsyArXiv
Originally posted September 13, 2018

Abstract

People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.
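To make the information-theoretic framing concrete, here is a minimal sketch, assuming illustrative probabilities that are not taken from the paper, of Shannon surprisal (-log p): the less probable an event, the more it tells us about the state of the world.

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal in bits: how informative an event of probability p is."""
    return -math.log2(p)

# Illustrative, made-up probabilities for the two scenarios in the abstract.
p_convenience_store_robbery = 0.20   # more expected event
p_clothing_store_robbery = 0.02      # less expected event

print(f"Expected robbery:   {surprisal_bits(p_convenience_store_robbery):.2f} bits")
print(f"Unexpected robbery: {surprisal_bits(p_clothing_store_robbery):.2f} bits")
# On the authors' proposal, the less expected robbery carries more information,
# which helps explain why it elicits the stronger moral reaction.
```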

The research/preprint is here.

Wednesday, November 14, 2018

Moral resilience: how to navigate ethical complexity in clinical practice

Cynda Rushton
Oxford University Press
Originally posted October 12, 2018

Clinicians are constantly confronted with ethical questions. Recent examples of healthcare workers caught up in high-profile best-interest cases are on the rise, but decisions regarding the allocation of the clinician’s time and skills, or scarce resources such as organs and medication, are everyday occurrences. The increasing pressure of “doing more with less” is one that can take its toll.

Dr Cynda Rushton is a professor of clinical ethics, and a proponent of ‘moral resilience’ as a pathway through which clinicians can lessen their experience of moral distress, and navigate the contentious issues they may face with a greater sense of integrity. In the video series below, she provides the guiding principles of moral resilience, and explores how they can be put into practice.

The videos are here.

Keeping Human Stories at the Center of Health Care

M. Bridget Duffy
Harvard Business Review
Originally published October 8, 2018

Here is an excerpt:

A mentor told me early in my career that only 20% of healing involves the high-tech stuff. The remaining 80%, he said, is about the relationships we build with patients, the physical environments we create, and the resources we provide that enable patients to tap into whatever they need for spiritual sustenance. The longer I work in health care, the more I realize just how right he was.

How do we get back to the 80-20 rule? By placing the well-being of patients and care teams at the top of the list for every initiative we undertake and every technology we introduce. Rather than just introducing technology with no thought as to its impact on clinicians — as happened with many rollouts of electronic medical records (EMRs) — we need to establish a way to quantifiably measure whether a new technology actually improves a clinician’s workday and ability to deliver care or simply creates hassles and inefficiency. Let’s develop an up-front “technology ROI” that measures workflow impact, inefficiency, hassle and impact on physician and nurse well-being.

The National Taskforce for Humanity in Healthcare, of which I am a founding member, is piloting a system of metrics for well-being developed by J. Bryan Sexton of Duke University Medical Center. Instead of measuring burnout or how broken health care people are, Dr. Sexton’s metrics focus on emotional thriving and emotional resilience. (The former are how strongly people agree or disagree with these statements: “I have a chance to use my strengths every day at work,” “I feel like I am thriving at my job,” “I feel like I am making a meaningful difference at my job,” and “I often have something that I am very much looking forward to at my job.”)

The info is here.

Tuesday, November 13, 2018

Mozilla’s ambitious plan to teach coders not to be evil

Katherine Schwab
Fast Company
Originally published October 10, 2018

Here is an excerpt:

There’s already a burgeoning movement to integrate ethics into the computer science classroom. Harvard and MIT have launched a joint class on the ethics of AI. UT Austin has an ethics class for computer science majors that it plans to eventually make a requirement. Stanford similarly is developing an ethics class within its computer science department. But many of these are one-off initiatives, and a national challenge of this type will provide the resources and incentive for more universities to think about these questions–and theoretically help the best ideas scale across the country.

Still, Baker says she’s sometimes cynical about how much impact ethics classes will have without broader social change. “There’s a lot of power and institutional pressure and wealth” in making decisions that are good for business, but might be bad for humanity, Baker says. “The fact you had some classes in ethics isn’t going to overcome all that and make things perfect. People have many motivations.”

Even so, teaching young people how to think about tech’s implications with nuance could help to combat some of those other motivations–primarily, money. The conversation shouldn’t be as binary as code; it should acknowledge typical ways data is used and help young technologists talk and think about the difference between providing value and being invasive.

The info is here.

Delusions and Three Myths of Irrational Belief

Bortolotti L. (2018) Delusions and Three Myths of Irrational Belief.
In: Bortolotti L. (eds) Delusions in Context. Palgrave Macmillan, Cham

Abstract

This chapter addresses the contribution that the delusion literature has made to the philosophy of belief. Three conclusions will be drawn: (1) a belief does not need to be epistemically rational to be used in the interpretation of behaviour; (2) a belief does not need to be epistemically rational to have significant psychological or epistemic benefits; (3) beliefs exhibiting the features of epistemic irrationality exemplified by delusions are not infrequent, and they are not an exception in a largely rational belief system. What we learn from the delusion literature is that there are complex relationships between rationality and interpretation, rationality and success, and rationality and knowledge.

The chapter is here.

Here is a portion of the Conclusion:

Second, it is not obvious that epistemically irrational beliefs should be corrected, challenged, or regarded as a glitch in an otherwise rational belief system. The whole attitude towards such beliefs should change. We all have many epistemically irrational beliefs, and they are not always a sign that we lack credibility or we are mentally unwell. Rather, they are predictable features of human cognition (Puddifoot and Bortolotti, 2018). We are not unbiased in the way we weigh up evidence and we tend to be conservative once we have adopted a belief, making it hard for new contrary evidence to unsettle our existing convictions. Some delusions are just a vivid illustration of a general tendency that is widely shared and hard to counteract. Delusions, just like more common epistemically irrational beliefs, may be a significant obstacle to the achievement of our goals and may cause a rift between our way of seeing the world and other people’s way. That is why it is important to develop a critical attitude towards their content.

Monday, November 12, 2018

7 Ways Marketers Can Use Corporate Morality to Prepare for Future Data Privacy Laws

Patrick Hogan
Adweek.com
Originally posted October 10, 2018

Here is an excerpt:

Many organizations have already made responsible adjustments in how they communicate with users about data collection and use, and have become compliant with recent laws. However, compliance does not always equal responsibility: even when companies require consent and provide information as required, linking to the terms of use, clicking a checkbox or double opting in still may not be enough to stay ahead or to protect consumers.

The best way to reduce the impact of the potential legislation is to take proactive steps now that set a new standard of responsibility in data use for your organization. Below are some measurable ways marketers can lead the way for the changing industry and create a foundational perception shift away from data and back to the acknowledgment of putting other humans first.

Create an action plan for complete data control and transparency

Set standards and protocols for your internal teams to determine how you are going to communicate with each other and your clients about data privacy, thus creating a path for all employees to follow and abide by moving forward.

Map data in your organization from receipt to storage to expulsion

Accountability is key. As a business, you should be able to know and speak to what is being done with the data that you are collecting throughout each stage of the process.

The info is here.

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Sunday, November 11, 2018

Nine risk management lessons for practitioners.

Taube, Daniel O., Scroppo, Joe, & Zelechoski, Amanda D.
Practice Innovations, October 4, 2018

Abstract

Risk management is an essential skill for professionals and is important throughout the course of their careers. Effective risk management blends a utilitarian focus on the potential costs and benefits of particular courses of action, with a solid foundation in ethical principles. Awareness of particularly risk-laden circumstances and practical strategies can promote safer and more effective practice. This article reviews nine situations and their associated lessons, illustrated by case examples. These situations emerged from our experience as risk management consultants who have listened to and assisted many practitioners in addressing the challenges they face on a day-to-day basis. The lessons include a focus on obtaining consent, setting boundaries, flexibility, attention to clinician affect, differentiating the clinician’s own values and needs from those of the client, awareness of the limits of competence, maintaining adequate legal knowledge, keeping good records, and routine consultation. We highlight issues and approaches to consider in these types of cases that minimize risks of adverse outcomes and enhance good practice.

The info is here.

Here is a portion of the article:

Being aware of basic legal parameters can help clinicians to avoid making errors in this complex arena. Yet clinicians are not usually lawyers and tend to have only limited legal knowledge. This gives rise to a risk of assuming more mastery than one may have.

Indeed, research suggests that a range of professionals, including psychotherapists, overestimate their capabilities and competencies, even in areas in which they have received substantial training (Creed, Wolk, Feinberg, Evans, & Beck, 2016; Lipsett, Harris, & Downing, 2011; Mathieson, Barnfield, & Beaumont, 2009; Walfish, McAlister, O’Donnell, & Lambert, 2012).

Saturday, November 10, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction

Maria Panagioti, Keith Geraghty, Judith Johnson
JAMA Intern Med. 2018;178(10):1317-1330.
doi:10.1001/jamainternmed.2018.3713

Abstract

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Study Selection  Quantitative observational studies.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cohen Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cohen Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.
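For readers unfamiliar with how pooled odds ratios like those above are produced, here is a rough sketch of a DerSimonian-Laird random-effects combination on made-up study-level data; it is not the authors' analysis, and the input numbers are purely illustrative.

```python
import math

# Illustrative, made-up per-study odds ratios with 95% CIs -- not data from this review.
studies = [
    (1.3, 1.0, 1.7),   # (OR, CI lower bound, CI upper bound)
    (2.8, 2.0, 3.9),
    (1.9, 1.3, 2.8),
]

# Work on the log-odds-ratio scale; recover each study's standard error from its CI width.
log_or = [math.log(o) for o, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for o, lo, hi in studies]
w_fixed = [1 / s ** 2 for s in se]                      # inverse-variance weights

# DerSimonian-Laird estimate of between-study variance (tau^2) from Cochran's Q.
fixed_mean = sum(w * y for w, y in zip(w_fixed, log_or)) / sum(w_fixed)
q = sum(w * (y - fixed_mean) ** 2 for w, y in zip(w_fixed, log_or))
df = len(studies) - 1
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled log-OR, and its standard error.
w_re = [1 / (s ** 2 + tau2) for s in se]
pooled = sum(w * y for w, y in zip(w_re, log_or)) / sum(w_re)
pooled_se = math.sqrt(1 / sum(w_re))

i_squared = max(0.0, (q - df) / q) * 100                # heterogeneity statistic (I^2)
print(f"Pooled OR: {math.exp(pooled):.2f} "
      f"(95% CI, {math.exp(pooled - 1.96 * pooled_se):.2f}-"
      f"{math.exp(pooled + 1.96 * pooled_se):.2f}); I2 = {i_squared:.0f}%")
```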

Friday, November 9, 2018

Believing without evidence is always morally wrong

Francisco Mejia Uribe
aeon.co
Originally posted November 5, 2018

Here are two excerpts:

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

(cut)

Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news peddlers, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge. In Clifford’s time, the way in which our beliefs were woven into the ‘precious deposit’ of common knowledge was primarily through speech and writing. Because of this capacity to communicate, ‘our words, our phrases, our forms and processes and modes of thought’ become ‘common property’. Subverting this ‘heirloom’, as he called it, by adding false beliefs is immoral because everyone’s lives ultimately rely on this vital, shared resource.

The info is here.