Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, December 17, 2018

How Wilbur Ross Lost Millions, Despite Flouting Ethics Rules

Dan Alexander
Forbes.com
Originally published December 14, 2018

Here is an excerpt:

By October 2017, Ross was out of time to divest. In his ethics agreement, he said he would get rid of the funds in the first 180 days after his confirmation—or if not, during a 60-day extension period. So on October 25, exactly 240 days after his confirmation, Ross sold part of his interests to funds managed by Goldman Sachs. Given that he waited until the last possible day to legally divest the assets, it seems certain that he ended up selling at a discount.

The very next day, on October 26, 2017, a reporter for the New York Times contacted Ross with a list of questions about his ties to Navigator, the Putin-linked company. Before the story was published, Ross took out a short position against Navigator—essentially betting that the company’s stock would go down. When the story finally came out, on November 5, 2017, the stock did not plummet initially, but it did creep down 4% by the time Ross closed the short position 11 days later, apparently bolstering his fortune by $3,000 to $10,000.
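
A rough sense of scale can be recovered from those two figures alone. As a back-of-the-envelope sketch (my own illustration, not from the Forbes article), a short seller’s gross gain is approximately the value of the position multiplied by the fractional price decline, ignoring borrowing costs, fees, and exact entry and exit prices; the Python snippet below applies that approximation to the 4% slide and the $3,000–$10,000 gain reported in the excerpt.

```python
# Back-of-the-envelope estimate of the short position implied by the excerpt.
# Assumptions (not from the article): gross gain ≈ position value × fractional
# price decline; borrowing costs, fees, and exact entry/exit prices are ignored.

def implied_position_value(gross_gain: float, price_decline: float) -> float:
    """Invert gain ≈ position × decline to estimate the position's value."""
    return gross_gain / price_decline

decline = 0.04  # the ~4% slide reported in the excerpt
for gain in (3_000, 10_000):
    position = implied_position_value(gain, decline)
    print(f"${gain:,} gain on a {decline:.0%} decline implies a short position "
          f"of roughly ${position:,.0f}")
```

On those assumptions, the reported gain corresponds to a short position of very roughly $75,000 to $250,000.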

On November 1, 2017, the day after Ross shorted Navigator, he signed a sworn statement that he had divested everything he previously told federal ethics officials he would. But that was not true. In fact, Ross still owned more than $10 million worth of stock in Invesco, the parent company of his former private equity firm. The next month, he sold those shares, pocketing at least $1.2 million more than he would have if he had sold when he first promised to.

Am I a Hypocrite? A Philosophical Self-Assessment

John Danaher
Philosophical Disquisitions
Originally published November 9, 2018

Here are two excerpts:

The common view among philosophers is that hypocrisy is a moral failing. Indeed, it is often viewed as one of the worst moral failings. Why is this? Christine McKinnon’s article ‘Hypocrisy, with a Note on Integrity’ provides a good, clear defence of this view. The article itself is a classic exercise in analytical philosophical psychology. It tries to clarify the structure of hypocrisy and explain why we should take it so seriously. It does so by arguing that there are certain behaviours, desires and dispositions that are the hallmark of the hypocrite and that these behaviours, desires and dispositions undermine our system of social norms.

McKinnon makes this case by considering some paradigmatic instances of hypocrisy, and identifying the necessary and sufficient conditions that allow us to label these as instances of hypocrisy. My opening example of my email behaviour probably fits this paradigmatic mode — despite my protestations to the contrary. A better example, however, might be religious hypocrisy. There have been many well-documented historical cases of this, but let’s not focus on these. Let’s instead imagine a case that closely parallels these historical examples. Suppose there is a devout fundamentalist Christian preacher. He regularly preaches about the evils of homosexuality and secularism and professes to be heterosexual and devout. He calls upon parents to disown their homosexual children or to subject them to ‘conversion therapy’. Then, one day, this preacher is discovered to be a homosexual himself. Not just that, it turns out that he has a long-term male partner whom he has kept hidden from the public for over 20 years, and that they were recently married in a non-religious humanist ceremony.

(cut)

In other words, what I refer to as my own hypocrisy seems to involve a good deal of self-deception and self-manipulation, not (just) the manipulation of others. That’s why I was relieved to read Daniel Statman’s article on ‘Hypocrisy and Self-Deception’. Statman wants to get away from the idea of the hypocrite as a moral cartoon character. Real people are way more interesting than that. As he sees it, the morally vicious form of hypocrisy that is the focus of McKinnon’s ire tends to overlap with and blur into self-deception much more frequently than she allows. The two things are not strongly dichotomous. Indeed, people can slide back and forth between them with relative ease: the self-deceived can slide into hypocrisy and the hypocrite can slide into self-deception.

Although I am attracted to this view, Statman points out that it is a tough sell. 

Sunday, December 16, 2018

Institutional Conflicts of Interest and Public Trust

Francisco G. Cigarroa, Bettie Sue Masters, Dan Sharphorn
JAMA. Published online November 14, 2018.
doi:10.1001/jama.2018.18482

Here is an excerpt:

It is no longer enough for institutions conducting research to have conflict of interest policies only for individual researchers; they must also directly address the growing concern about institutional conflicts of interest. Every research institution and university deserving of the public’s trust needs to have well-defined institutional conflict of interest policies. A process must be established that will ensure research is untainted by any personal financial interests of the researcher, and that no financial interests exist for the institution or the institution’s key decision makers that could cloud otherwise open and honest decisions regarding the institution’s research mission.

Education and culture are fundamental to the successful implementation of any policy. It is incumbent upon institutional decision makers and all employees involved in research to be knowledgeable about individual and institutional conflict of interest policies. It may not always be obvious to researchers that they have a perceived or real conflict of interest or bias. Therefore, it is important to establish a culture of transparency and disclosure of any outside interests that could potentially influence research and include individuals at the highest level of the institution. Policies should be clear and easy to implement and permit pathways to provide disclosure with adequate explanation, as well as information regarding how potential or real conflicts of interest are managed or eliminated. This will require the establishment of interactive databases aimed at mitigating, to the extent possible, both individual and institutional conflicts of interest.

Policies alone are not sufficient to protect an institution from conflicts of interest. Institutional compliance with these policies and dedication to establishing processes by which to identify, resolve, or eliminate institutional conflicts of interest are necessary. Institutions and their respective boards of trustees should be prepared to address sensitive situations when a supervisor, executive leader, or trustee is identified as contributing to an institutional conflict of interest and be prepared to direct specific actions to resolve such a conflict. In this regard, it would be prudent for governance to establish an institutional conflicts of interest committee with sufficient authority to manage or eliminate perceived or real conflicts of interest affecting the institution.

Saturday, December 15, 2018

What is ‘moral distress’? A narrative synthesis of the literature

Georgina Morley, Jonathan Ives, Caroline Bradbury-Jones, & Fiona Irvine
Nursing Ethics
First published: October 8, 2017 (Review Article)

Introduction

The concept of moral distress (MD) was introduced to nursing by Jameton, who defined MD as arising ‘when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action’. MD has subsequently gained increasing attention in nursing research, the majority of which has been conducted in North America, though studies are now emerging from South America, Europe, the Middle East and Asia. Studies have highlighted the deleterious effects of MD, with correlations between higher levels of MD, negative perceptions of ethical climate and increased levels of compassion fatigue among nurses. The consensus is that MD can negatively impact patient care, causing nurses to avoid certain clinical situations and ultimately leave the profession. MD is therefore a significant problem within nursing, requiring investigation, understanding, clarification and responses. The growing body of MD research, however, has arguably failed to bring the required clarification and has instead complicated attempts to study it. The increasing number of cited causes and effects of MD means the term has expanded to the point that, according to Hanna, and McCarthy and Deady, it is becoming an ‘umbrella term’ that lacks conceptual clarity, referring unhelpfully to a wide range of phenomena and causes. Without a coherent and consistent conceptual understanding, however, empirical studies of MD’s prevalence, effects, and possible responses are likely to be confused and contradictory.

A useful starting point is a systematic exploration of the existing literature to critically examine the definitions and understandings currently available, interrogating their similarities, differences, and conceptual strengths and weaknesses. This article presents a narrative synthesis that explores proposed necessary and sufficient conditions for MD and, in doing so, also identifies areas of conceptual tension and agreement.

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Why Health Professionals Should Speak Out Against False Beliefs on the Internet

Joel T. Wu and Jennifer B. McCormick
AMA J Ethics. 2018;20(11):E1052-1058.
doi: 10.1001/amajethics.2018.1052.

Abstract

Broad dissemination and consumption of false or misleading health information, amplified by the internet, poses risks to public health and problems for both the health care enterprise and the government. In this article, we review government power for, and constitutional limits on, regulating health-related speech, particularly on the internet. We suggest that government regulation can only partially address false or misleading health information dissemination. Drawing on the American Medical Association’s Code of Medical Ethics, we argue that health care professionals have responsibilities to convey truthful information to patients, peers, and communities. Finally, we suggest that all health care professionals have essential roles in helping patients and fellow citizens obtain reliable, evidence-based health information.

Here is an excerpt:

We would suggest that health care professionals have an ethical obligation to correct false or misleading health information, share truthful health information, and direct people to reliable sources of health information within their communities and spheres of influence. After all, health and well-being are values shared by almost everyone. Principle V of the AMA Principles of Medical Ethics states: “A physician shall continue to study, apply, and advance scientific knowledge, maintain a commitment to medical education, make relevant information available to patients, colleagues, and the public, obtain consultation, and use the talents of other health professionals when indicated” (italics added). And Principle VII states: “A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health” (italics added). Taken together, these principles articulate an ethical obligation to make relevant information available to the public to improve community and public health. In the modern information age, wherein the unconstrained and largely unregulated proliferation of false health information is enabled by the internet and medical knowledge is no longer privileged, these 2 principles have a special weight and relevance.

Thursday, December 13, 2018

Does deciding among morally relevant options feel like making a choice? How morality constrains people’s sense of choice

Kouchaki, M., Smith, I. H., & Savani, K. (2018).
Journal of Personality and Social Psychology, 115(5), 788-804.
http://dx.doi.org/10.1037/pspa0000128

Abstract

We demonstrate that a difference exists between objectively having and psychologically perceiving multiple-choice options of a given decision, showing that morality serves as a constraint on people’s perceptions of choice. Across 8 studies (N = 2,217), using both experimental and correlational methods, we find that people deciding among options they view as moral in nature experience a lower sense of choice than people deciding among the same options but who do not view them as morally relevant. Moreover, this lower sense of choice is evident in people’s attentional patterns. When deciding among morally relevant options displayed on a computer screen, people devote less visual attention to the option that they ultimately reject, suggesting that when they perceive that there is a morally correct option, they are less likely to even consider immoral options as viable alternatives in their decision-making process. Furthermore, we find that experiencing a lower sense of choice because of moral considerations can have downstream behavioral consequences: after deciding among moral (but not nonmoral) options, people (in Western cultures) tend to choose more variety in an unrelated task, likely because choosing more variety helps them reassert their sense of choice. Taken together, our findings suggest that morality is an important factor that constrains people’s perceptions of choice, creating a disjunction between objectively having a choice and subjectively perceiving that one has a choice.

A pdf can be found here.

A choice may not feel like a choice when morality is at play

Susan Kelley
Cornell Chronicle
Originally posted November 15, 2018

Here is an excerpt:

People who viewed the issues as moral – regardless of which side of the debate they stood on – felt less of a sense of choice when faced with the decisions. “In contrast, people who made a decision that was not imbued with morality were more likely to view it as a choice,” Smith said.

The researchers saw this weaker sense of choice play out in the participants’ attention patterns. When deciding among morally relevant options displayed on a computer screen, they devoted less visual attention to the option that they ultimately rejected, suggesting they were less likely to even consider immoral options as viable alternatives in their decision-making, the study said.

Moreover, participants who felt they had fewer options tended to choose more variety later on. After deciding among moral options, the participants tended to opt for more variety when given the choice of seven different types of chocolate in an unrelated task. “It’s a very subtle effect but it’s indicative that people are trying to reassert their sense of autonomy,” Smith said.

Understanding the way that people make morally relevant decisions has implications for business ethics, he said: “If we can figure out what influences people to behave ethically or not, we can better empower managers with tools that might help them reduce unethical behavior in the workplace.”

The info is here.

The original research is here.

Wednesday, December 12, 2018

Social relationships more important than hard evidence in partisan politics

phys.org
Dartmouth College
Originally posted November 13, 2018

Here is an excerpt:

Three factors drive the formation of social and political groups according to the research: social pressure to have stronger opinions, the relationship of an individual's opinions to those of their social neighbors, and the benefits of having social connections.

A key idea studied in the paper is that people choose their opinions and their connections to avoid differences of opinion with their social neighbors. By joining like-minded groups, individuals also prevent the psychological stress, or "cognitive dissonance," of considering opinions that do not match their own.

"Human social tendencies are what form the foundation of that political behavior," said Tucker Evans, a senior at Dartmouth who led the study. "Ultimately, strong relationships can have more value than hard evidence, even for things that some would take as proven fact."

The information is here.

The original research is here.

Why Are Doctors Killing Themselves?

The Practical Professional in Healthcare
October/November 2018

Here is an excerpt:

The nation loses 300 to 400 physicians each year, the equivalent of two large medical school classes, and more than a million patients lose their doctor. According to a new research study encompassing data from the past ten years, physicians are committing suicide at a rate more than twice that of the general population—higher even than the rate for veterans.

With a critical shortage of physicians looming and advocates like Pamela Wible calling attention to the problem, the increasingly urgent question remains: Why are doctors killing themselves? And what can be done to help? In response, researchers are ramping up their efforts to understand the causes of physician suicide; leading hospitals, medical schools and professional organizations are pioneering new programs and interventions; and regulators are reconsidering how they might revise the licensing/renewal process to support their efforts.

The info is here.

There are several other articles on physician self-care, a topic that applies to other helping professions as well.

Tuesday, December 11, 2018

Beyond the Boundaries: Ethical Issues in the Practice of Indirect Personality Assessment in Non-Health-Service Psychology

Marvin W. Acklin
Journal of Personality Assessment
https://doi.org/10.1080/00223891.2018.1522639

Abstract

This article focuses on ethical quandaries in the practice of indirect personality assessment in non-health-service psychology. Indirect personality assessment methods do not involve face-to-face interaction. Personality assessment at a distance is a methodological development of personality and social psychology, psychobiography, and psychohistory. Indirect personality methods are used in clinical, forensic, law enforcement, public safety, and national security settings. Psychology practice in non-health-service settings creates tensions between principles of beneficence and duty to society. This article defines methods of indirect personality assessment and some ethical ramifications. Their application in non-health-service settings occurs in the context of intense controversy over the ethics of psychologists’ participation in work settings where there are third-party loyalties, absence of voluntary informed consent, presence of nonstipulated harms, and absence of legal and ethical accountability. A hypothetical case example illustrates typical quandaries encountered in a national security assessment. This article provides a framework for critically examining ethical quandaries, a contemporary conceptual and process model for integrative moral cognition, and parameters for ethical reasoning by the individual practitioner under the exigencies of real-world practice.

Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis?

Nicole Martinez-Martin, Laura B. Dunn, and Laura Weiss Roberts
AMA J Ethics. 2018;20(9):E804-811.
doi: 10.1001/amajethics.2018.804.

Abstract

Machine learning is a method for predicting clinically relevant variables, such as opportunities for early intervention, potential treatment response, prognosis, and health outcomes. This commentary examines the following ethical questions about machine learning in a case of a patient with new onset psychosis: (1) When is clinical innovation ethically acceptable? (2) How should clinicians communicate with patients about the ethical issues raised by a machine learning predictive model?

(cut)

Conclusion

In order to implement the predictive tool in an ethical manner, Dr K will need to carefully consider how to give appropriate information—in an understandable manner—to patients and families regarding use of the predictive model. In order to maximize benefits from the predictive model and minimize risks, Dr K and the institution as a whole will need to formulate ethically appropriate procedures and protocols surrounding the instrument. For example, implementation of the predictive tool should consider the ability of a physician to override the predictive model in support of ethically or clinically important variables or values, such as beneficence. Such measures could help realize the clinical application potential of machine learning tools, such as this psychosis prediction model, to improve the lives of patients.

Monday, December 10, 2018

What makes a ‘good’ clinical ethicist?

Trevor Bibler
Baylor College of Medicine Blog
Originally posted October 12, 2018

Here is an excerpt:

Some hold that the complexity of clinical ethics consultations cannot be reduced to multiple-choice questions based on a few sources, arguing that creating multiple-choice questions that reflect the challenges of doing clinical ethics is nearly impossible. Most of the time, the HEC-C Program is careful to emphasize that it is testing knowledge of issues in clinical ethics, not the ethicist’s ability to apply this knowledge to the practice of clinical ethics.

This is a nuanced distinction that may be lost on those outside the field. For example, an administrator might view the HEC-C Program as separating a good ethicist from an inadequate ethicist simply because they have 400 hours of experience and can pass a multiple-choice exam.

Others disagree with the source material (called “core references”) that serves as the basis for exam questions. I believe the core references, if repetitious, are important works in the field. My concern is that these works do not pay sufficient attention to some of the most pressing and challenging issues in clinical ethics today: income inequality, care for non-citizens, drug abuse, race, religion, sex and gender, to name a few areas.

Also, it’s feasible that inadequate ethicists will become certified. I can imagine an ethicist might meet the requirements, but fall short of being a good ethicist because in practice they are poor communicators, lack empathy, are authoritarian when analyzing ethics issues, or have an off-putting presence.

On the other hand, I know some ethicists I would consider experts in the field who are not going to undergo the certification process because they disagree with it. Both of these scenarios show that HEC certification should not be the single requirement that separates a good ethicist from an inadequate ethicist.

The info is here.

Somers Point therapist charged with hiring hitman to 'permanently disfigure' victim

Lauren Carroll
The Press of Atlantic City
Originally posted November 6, 2018

A Somers Point therapist told an undercover FBI agent posing as a hitman she wanted her Massachusetts colleague’s “face bashed-in” and arm broken, according to a criminal complaint filed with the U.S. Attorney’s Office.

Diane Sylvia, 58, has been charged with solicitation to commit a crime of violence and appeared in Camden federal court Monday.

According to the criminal complaint filed Friday, a person contacted the FBI to report a murder-for-hire scheme on Sept. 24.

The informant is a former member of an organized criminal gang and was in therapy with Sylvia, a licensed clinical social worker. Sylvia allegedly asked the informant to help kill a North Attleboro, Massachusetts, man, the complaint said.

Sylvia’s lawyer Michael Paulhus of Toms River could not be reached for comment. Sylvia could not be reached for comment.

According to the court documents, Sylvia targeted the man after he threatened to report her to a licensing board. She wanted the man assaulted to “make (her) feel better,” according to court documents.

The info is here.

Sunday, December 9, 2018

The Vulnerable World Hypothesis

Nick Bostrom
Working Paper (2018)

Abstract

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

The working paper is here.

Vulnerable World Hypothesis: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

Saturday, December 8, 2018

Psychological health profiles of Canadian psychotherapists: A wake up call on psychotherapists’ mental health

Laverdière, O., Kealy, D., Ogrodniczuk, J. S., & Morin, A. J. S.
(2018) Canadian Psychology/Psychologie canadienne, 59(4), 315-322.
http://dx.doi.org/10.1037/cap0000159

Abstract

The mental health of psychotherapists represents a key determinant of their ability to deliver optimal psychological services. However, this important topic is seldom the focus of empirical investigations. The objectives of the current study were twofold. First, the study aimed to assess subjective ratings of mental health in a broad sample of Canadian psychotherapists. Second, this study aimed to identify profiles of psychotherapists according to their scores on a series of mental health indicators. A total of 240 psychotherapists participated in the survey. Results indicated that 20% of psychotherapists were emotionally exhausted and 10% were in a state of significant psychological distress. Latent profile analyses revealed 4 profiles of psychotherapists that differed on their level of mental health: highly symptomatic (12%), at risk (35%), well adapted (40%), and high functioning (12%). Characteristics of the profiles are discussed, as well as potential implications of our findings for practice, trainee selection, and future research on psychotherapists’ mental health.

Here is part of the Discussion:

Considering that 12% of the psychotherapists were highly symptomatic and that an additional 35% could be considered at risk for significant mental health problems, the present findings raise troubling questions. Were these psychotherapists adequately prepared to help clients? From the perspective of attachment theory, the psychotherapist functions as an attachment figure for the client (Mallinckrodt, 2010); clients require their psychotherapists to provide a secure attachment base that allows for the exploration of negative thoughts and feelings, as well as for the alleviation of distress (Slade, 2016). A psychotherapist who is preoccupied with his or her own personal distress may find it very difficult to play this role efficiently and may at least implicitly bring some maladaptive features to the clinical encounter, thus depriving the client of the possibility of experiencing a secure attachment in the context of the therapeutic relationship. Moreover, regardless of the potential attachment implications, clients prefer experiencing a secure relationship with an emotionally responsive psychotherapist (Swift & Callahan, 2010). More precisely, Swift and Callahan (2010) found that clients were, to some extent, willing to forego empirically supported interventions in favour of a satisfactory relationship with the therapist, empathy from the therapist, and greater level of therapist experience. The present results cast reasonable doubt on the ability of exhausted psychotherapists, and more so psychologically ill therapists, to present themselves in a positive light to the client in order to build strong therapeutic relationships with them.

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Neuroexistentialism: A New Search for Meaning

Owen Flanagan and Gregg D. Caruso
The Philosopher's Magazine
Originally published November 6, 2018

Existentialisms are responses to recognisable diminishments in the self-image of persons caused by social or political rearrangements or ruptures, and they typically involve two steps: (a) admission of the anxiety and an analysis of its causes, and (b) some sort of attempt to regain a positive, less anguished, more hopeful image of persons. With regard to the first step, existentialisms typically involve a philosophical expression of the anxiety that there are no deep, satisfying answers that make sense of the human predicament and explain what makes human life meaningful, and thus that there are no secure foundations for meaning, morals, and purpose. There are three kinds of existentialisms that respond to three different kinds of grounding projects – grounding in God’s nature, in a shared vision of the collective good, or in science. The first-wave existentialism of Kierkegaard, Dostoevsky, and Nietzsche expressed anxiety about the idea that meaning and morals are made secure because of God’s omniscience and good will. The second-wave existentialism of Sartre, Camus, and de Beauvoir was a post-Holocaust response to the idea that some uplifting secular vision of the common good might serve as a foundation. Today, there is a third-wave existentialism, neuroexistentialism, which expresses the anxiety that, even as science yields the truth about human nature, it also disenchants.

Unlike the previous two waves of existentialism, neuroexistentialism is not caused by a problem with ecclesiastical authority, nor by the shock of coming face to face with the moral horror of nation state actors and their citizens. Rather, neuroexistentialism is caused by the rise of the scientific authority of the human sciences and a resultant clash between the scientific and humanistic image of persons. Neuroexistentialism is a twenty-first-century anxiety over the way contemporary neuroscience helps secure in a particularly vivid way the message of Darwin from 150 years ago: that humans are animals – not half animal, not some percentage animal, not just above the animals, but 100 percent animal. Every day and in every way, neuroscience removes the last vestiges of an immaterial soul or self. It has no need for such posits. It also suggests that the mind is the brain and all mental processes just are (or are realised in) neural processes, that introspection is a poor instrument for revealing how the mind works, that there is no ghost in the machine or Cartesian theatre where consciousness comes together, that death is the end since when the brain ceases to function so too does consciousness, and that our sense of self may in part be an illusion.

The info is here.

Thursday, December 6, 2018

Partisanship, Political Knowledge, and the Dunning‐Kruger Effect

Ian G. Anson
Political Psychology
First published: 02 April 2018
https://doi.org/10.1111/pops.12490

Abstract

A widely cited finding in social psychology holds that individuals with low levels of competence will judge themselves to be higher achieving than they really are. In the present study, I examine how the so‐called “Dunning‐Kruger effect” conditions citizens' perceptions of political knowledgeability. While low performers on a political knowledge task are expected to engage in overconfident self‐placement and self‐assessment when reflecting on their performance, I also expect the increased salience of partisan identities to exacerbate this phenomenon due to the effects of directional motivated reasoning. Survey experimental results confirm the Dunning‐Kruger effect in the realm of political knowledge. They also show that individuals with moderately low political expertise rate themselves as increasingly politically knowledgeable when partisan identities are made salient. This below‐average group is also likely to rely on partisan source cues to evaluate the political knowledge of peers. In a concluding section, I comment on the meaning of these findings for contemporary debates about rational ignorance, motivated reasoning, and political polarization.

Survey Finds Widespread 'Moral Distress' Among Veterinarians

Carey Goldberg
NPR.org
Originally posted October 17, 2018

In some ways, it can be harder to be a doctor of animals than a doctor of humans.

"We are in the really unenviable, and really difficult, position of caring for patients maybe for their entire lives, developing our own relationships with those animals — and then being asked to kill them," says Dr. Lisa Moses, a veterinarian at the Massachusetts Society for the Prevention of Cruelty to Animals-Angell Animal Medical Center and a bioethicist at Harvard Medical School.

She's the lead author of a study published Monday in the Journal of Veterinary Internal Medicine about "moral distress" among veterinarians. The survey of more than 800 vets found that most feel ethical qualms — at least sometimes — about what pet owners ask them to do. And that takes a toll on their mental health.

Dr. Virginia Sinnott-Stutzman is all too familiar with the results. As a senior staff veterinarian in emergency and critical care at Angell, she sees a lot of very sick animals — and quite a few decisions by owners that trouble her.

Sometimes, owners elect to have their pets put to sleep because they can't or won't pay for treatment, she says. Or the opposite, "where we know in our heart of hearts that there is no hope to save the animal, or that the animal is suffering and the owners have a set of beliefs that make them want to keep going."

The info is here.

Wednesday, December 5, 2018

Toward a psychology of Homo sapiens: Making psychological science more representative of the human population

Mostafa Salari Rad, Alison Jane Martingano, and Jeremy Ginges
PNAS, 115(45), 11401-11405. Published ahead of print November 6, 2018. https://doi.org/10.1073/pnas.1721165115

Abstract

Two primary goals of psychological science should be to understand what aspects of human psychology are universal and the way that context and culture produce variability. This requires that we take into account the importance of culture and context in the way that we write our papers and in the types of populations that we sample. However, most research published in our leading journals has relied on sampling WEIRD (Western, educated, industrialized, rich, and democratic) populations. One might expect that our scholarly work and editorial choices would by now reflect the knowledge that Western populations may not be representative of humans generally with respect to any given psychological phenomenon. However, as we show here, almost all research published by one of our leading journals, Psychological Science, relies on Western samples and uses these data in an unreflective way to make inferences about humans in general. To take us forward, we offer a set of concrete proposals for authors, journal editors, and reviewers that may lead to a psychological science that is more representative of the human condition.

Georgia Tech has had a ‘dramatic increase’ in ethics complaints, president says

Eric Stirgus
The Atlanta Journal-Constitution
Originally published November 6, 2018

Here is an excerpt:

The Atlanta Journal-Constitution reported in September that Georgia Tech is often slow in completing ethics investigations. Georgia Tech took an average of 102 days last year to investigate a complaint, the second-longest time of any college or university in the University System of Georgia, according to a report presented in April to the state’s Board of Regents. Savannah State University had the longest average time, 135 days.

Tuesday’s meeting is the kick-off to more than a week’s worth of discussions at Tech to improve its ethics culture. University System of Georgia Chancellor Steve Wrigley ordered Georgia Tech to update him on what officials there are doing to improve after reports found problems such as a top official who was a paid board member of a German-based company that had contracts with Tech. Peterson’s next update is due Monday.

A few employees told Peterson they’re concerned that many administrators are now afraid to make decisions and asked the president what’s being done to address that. Peterson acknowledged “there’s some anxiety on campus” and asked employees to “embrace each other” as they work through what he described as an embarrassing chapter in the school’s history.

The info is here.

Tuesday, December 4, 2018

Letting tech firms frame the AI ethics debate is a mistake

Robert Hart
www.fastcompany.com
Originally posted November 2, 2018

Here is an excerpt:

Even many ethics-focused panel discussions–or manel discussions, as some call them–are pale, male, and stale. That is to say, they are made up predominantly of old, white, straight, and wealthy men. Yet these discussions are meant to be guiding lights for AI technologies that affect everyone.

A historical illustration is useful here. Consider polio, a disease that was declared global public health enemy number one after the successful eradication of smallpox decades ago. The “global” part is important. Although the movement to eradicate polio was launched by the World Health Assembly, the decision-making body of the United Nations’ World Health Organization, the eradication campaign was spearheaded primarily by groups in the U.S. and similarly wealthy countries. Promulgated with intense international pressure, the campaign distorted local health priorities in many parts of the developing world.

It’s not that the developing countries wanted their citizens to contract polio. Of course, they didn’t. It’s just that they would have rather spent the significant sums of money on more pressing local problems. In essence, one wealthy country imposed its own moral judgment on the rest of the world, with little forethought about the potential unintended consequences. The voices of a few in the West grew to dominate and overpower those elsewhere–a kind of ethical colonialism, if you will.

The info is here.

Document ‘informed refusal’ just as you would informed consent

James Scibilia
AAP News
Originally posted October 20, 2018

Here is an excerpt:

The requirements of informed refusal are the same as informed consent. Providers must explain:

  • the proposed treatment or testing;
  • the risks and benefits of refusal;
  • anticipated outcome with and without treatment; and
  • alternative therapies, if available.

Documentation of this discussion, including all four components, in the medical record is critical to mounting a successful defense against a claim that you failed to warn about the consequences of refusing care.

Since state laws vary, it is good practice to check with your malpractice carrier about preferred risk management documentation. Generally, the facts of these discussions should be included and signed by the caretaker. This conversation and documentation should not be delegated to other members of the health care team. At least one state has affirmed through a Supreme Court decision that informed consent must be obtained by the provider performing the procedure and not another team member; it is likely the concept of informed refusal would bear the same requirements.

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol 13, No. 5, pp. 451-457.

Abstract

In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants do not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participant’s decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years. Major internet platforms are behind many of the highest-profile examples: Facebook may be able to infer protected attributes such as sexual orientation and race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on eligibility for loans and infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far-reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.

The info is here.

Saturday, December 1, 2018

Building trust by tearing others down: When accusing others of unethical behavior engenders trust

Jessica A. Kennedy, Maurice E. Schweitzer.
Organizational Behavior and Human Decision Processes
Volume 149, November 2018, Pages 111-128

Abstract

We demonstrate that accusations harm trust in targets, but boost trust in the accuser when the accusation signals that the accuser has high integrity. Compared to individuals who did not accuse targets of engaging in unethical behavior, accusers engendered greater trust when observers perceived the accusation to be motivated by a desire to defend moral norms, rather than by a desire to advance ulterior motives. We also found that the accuser’s moral hypocrisy, the accusation's revealed veracity, and the target’s intentions when committing the unethical act moderate the trust benefits conferred to accusers. Taken together, we find that accusations have important interpersonal consequences.

Highlights

•    Accusing others of unethical behavior can engender greater trust in an accuser.
•    Accusations can elevate trust by boosting perceptions of accusers’ integrity.
•    Accusations fail to build trust when they are perceived to reflect ulterior motives.
•    Morally hypocritical accusers and false accusations fail to build trust.
•    Accusations harm trust in the target.

The research is here.