Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, November 19, 2019

Medical board declines to act against fertility doctor who inseminated woman with his own sperm

Dr. McMorries
Marie Saavedra and Mark Smith
Originally posted Oct 28, 2019

The Texas Medical Board has declined to act against a fertility doctor who inseminated a woman with his own sperm rather than from a donor the mother selected.

Though Texas lawmakers have now made such an act illegal, the Texas Medical Board found the actions did not “fall below the acceptable standard of care,” and declined further review, according to a response to a complaint obtained by WFAA.

In a follow-up email, a spokesperson told WFAA the board was hamstrung because it cannot review complaints about treatment that occurred seven or more years earlier.

The complaint was filed on behalf of 32-year-old Eve Wiley, of Dallas, who only recently learned her biological father wasn't the sperm donor selected by her mother. Instead, Wiley discovered her biological father was her mother’s fertility doctor in Nacogdoches.

Now 65, Wiley's mother, Margo Williams, had sought help from Dr. Kim McMorries because her husband was infertile.

The info is here.

Moral Responsibility

Talbert, Matthew
The Stanford Encyclopedia of Philosophy 
(Winter 2019 Edition), Edward N. Zalta (ed.)

Making judgments about whether a person is morally responsible for her behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.

The judgment that a person is morally responsible for her behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing her behavior as arising (in the right way) from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is the task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and non-human animals, very young children, and those suffering from severe developmental disabilities or dementia (to give a few examples) are generally taken to lack them.

To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that she is morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012: 16–17 and M. Zimmerman 1988: 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good.

The information is here.

Monday, November 18, 2019

Suicide Has Been Deadlier Than Combat for the Military

Carol Giacomo
The New York Times
Originally published November 1, 2019

Here are two excerpts:

The data for veterans is also alarming.

In 2016, veterans were one and a half times more likely to kill themselves than people who hadn’t served in the military, according to the House Committee on Oversight and Reform.

Among those ages 18 to 34, the rate went up nearly 80 percent from 2005 to 2016.

The risk nearly doubles in the first year after a veteran leaves active duty, experts say.

The Pentagon this year also reported on military families, estimating that in 2017 there were 186 suicide deaths among military spouses and dependents.


Experts say suicides are complex, resulting from many factors, notably impulsive decisions with little warning. Pentagon officials say a majority of service members who die by suicide do not have mental illness. While combat is undoubtedly high stress, there are conflicting views on whether deployments increase risk.

Where there seems to be consensus is that high-quality health care and keeping weapons out of the hands of people in distress can make a positive difference.

Studies show that the Department of Veterans Affairs provides high-quality care, and its Veterans Crisis Line “surpasses most crisis lines” operating today, according to Terri Tanielian, a researcher with the RAND Corporation. (The Veterans Crisis Line is staffed 24/7 at 800-273-8255, press 1. Services also are available online or by texting 838255.)

But Veterans Affairs often can’t accommodate all those needing help, resulting in patients being sent to community-based mental health professionals who lack the training to deal with service members.

The info is here.

Understanding behavioral ethics can strengthen your compliance program

Jeffrey Kaplan
The FCPA Blog
Originally posted October 21, 2019

Behavioral ethics is a well-known field of social science which shows how — due to various cognitive biases — “we are not as ethical as we think.” Behavioral compliance and ethics (which is less well known) attempts to use behavioral ethics insights to develop and maintain effective compliance programs. In this post I explore some of the ways that this can be done.

Behavioral C&E should be viewed on two levels. The first could be called specific behavioral C&E lessons, meaning enhancements to the various discrete C&E program elements — e.g., risk assessment, training — based on behavioral ethics insights. Several of these are discussed below.

The second — and more general — aspect of behavioral C&E is the above-mentioned overarching finding that we are not as ethical as we think. The importance of this general lesson rests on the notion that the greatest challenge to having an effective C&E program in an organization is often more about the “will” than the “way.”

That is, what is lacking in many business organizations is an understanding that strong C&E is truly necessary. After all, if we were as ethical as we think, then effective risk mitigation would be just a matter of finding the right punishment for an offense, and the power of logical thinking would do the rest. Behavioral ethics teaches that that assumption is ill-founded.

The info is here.

Sunday, November 17, 2019

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Stefan Schubert, Lucius Caviola & Nadira S. Faber
Scientific Reports volume 9, Article number: 15100 (2019)


The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.


Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

The research is here.

Saturday, November 16, 2019

Moral grandstanding in public discourse: Status-seeking motives as a potential explanatory mechanism in predicting conflict

Grubbs JB, Warmke B, Tosi J, James AS, Campbell WK
(2019) PLoS ONE 14(10): e0223749.


Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted six studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); and a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, Baseline N = 499, follow-up n = 296), and a large, one-week YouGov sample matched to U.S. demographic norms (Baseline N = 2,519, follow-up n = 1,776). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding motivation was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.


Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links various domains of psychology with moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Within the present work, we focused on the motivation to engage in MG. Specifically, MG Motivation is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors seem to be consistent with the construct of status-seeking more broadly, appearing to represent prestige and dominance striving, both of which were found to be associated with greater interpersonal conflict and polarization. These results were consistently replicated in samples of U.S. undergraduates, nationally representative cross-sectional samples of U.S. residents, and longitudinal studies of adults in the U.S. Collectively, these results suggest that MG Motivation is a useful psychological construct with the potential to aid our understanding of the intraindividual mechanisms driving caustic public discourse.

Friday, November 15, 2019

Gartner Fellow discusses ethics in artificial intelligence

Teena Maddox
Originally published October 28, 2019

Here is an excerpt:

There are tons of ways you can use AI ethically and also unethically. One example typically cited is using attributes of people that shouldn't be used — for instance, when granting somebody a mortgage or access to something, or making other decisions. Racial profiling is typically mentioned as an example. So, you need to be mindful of which attributes are being used for making decisions. How do the algorithms learn? Another form of AI abuse is, for instance, autonomous killer drones. Would we allow algorithms to decide who gets bombed by a drone and who does not? Most people seem to agree that autonomous killer drones are not a very good idea.

The most important thing a developer can do in order to create ethical AI is to think of this not as technology, but as an exercise in self-reflection. Developers have certain biases. They have certain characteristics themselves. For instance, developers are keen to search for the optimal solution to a problem; it is built into their brains. But ethics is a very pluralistic thing. Different people have different ideas. There is not one optimal answer of what is good and bad. First and foremost, developers should be aware of their own ethical biases of what they think is good and bad, and create an environment of diversity where they test those assumptions and where they test their results. The developer brain isn't the only brain, or type of brain, that is out there, to say the least.

So, AI and ethics is really a story of hope. For the very first time, a discussion of ethics is taking place before widespread implementation, unlike in previous rounds, where the ethical considerations took place only after the effects were felt.

The info is here.

Is Moral Relativism Really a Problem?

Thomas Polzler
Scientific American Blog
Originally published October 16, 2019

Here is an excerpt:

Warnings against moral relativism are most often based on theoretical speculation. Critics consider the view’s nature and add certain assumptions about human psychology. Then they infer how being a relativist might affect a person’s behavior. For example, for a relativist, even actions such as murder or rape can never be really or absolutely wrong; they are only wrong to the extent that the relativist or most members of his or her culture believe them to be so.

One may therefore worry that relativists are less motivated to refrain from murdering and raping than people who regard these actions as objectively wrong. While this scenario may sound plausible, however, it is important to note that relativism’s effects can only ultimately be determined by relevant studies.

So far, scientific investigations do not support the suspicion that moral relativism is problematic. True, there are two studies that do suggest such a conclusion. In one of them, participants were led to think about morality in either relativist or objectivist terms. It turned out that subjects in the relativist condition were more likely to cheat in a lottery and to state that they would be willing to steal than those in the objectivist condition. In the other study, participants who had been exposed to relativist ideas were less likely to donate to charity than those who had been exposed to objectivist ones.

That said, there is also evidence that associates moral relativism with positive behaviors. In one of her earlier studies, Wright and her colleagues informed their participants that another person disagreed with one of their moral judgments. Then the researchers measured the subjects’ degree of tolerance for this person’s divergent moral view. For example, participants were asked how willing they would be to interact with the person, how willing they would be to help him or her and how comfortable they generally were with another individual denying one of their moral judgments. It turned out that subjects with relativist leanings were more tolerant toward the disagreeing person than those who had tended toward objectivism.

The info is here.

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science, 25 Oct 2019:
Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score that indicates they do not need to be enrolled in a “high-risk management program,” the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, by extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, the outputs will continue to be racially biased by design, even as they may not explicitly attempt to take race into account. For this reason, Obermeyer et al. engage the literature on “problem formulation,” which illustrates that depending on how one defines the problem to be solved—whether to lower health care costs or to increase access to care—the outcomes will vary considerably.
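The core mechanism the excerpt describes — training on cost as a proxy for health need — can be illustrated with a toy simulation. Everything below is hypothetical (the population, the 0.6 "access" factor, the 10% enrollment cutoff); none of the numbers come from Obermeyer et al. The sketch only shows how a cost-based score can under-enroll sicker Black patients even though the score never sees race, and how recalibrating to active chronic conditions changes who is flagged:

```python
import random

random.seed(0)

# Hypothetical synthetic population: illness burden (active chronic
# conditions) is identically distributed across groups, but observed
# cost -- the proxy label a cost-trained tool optimizes -- runs lower
# for Black patients at the same illness level (e.g., due to unequal
# access to care). All parameters are illustrative assumptions.
patients = []
for _ in range(10_000):
    group = random.choice(["Black", "White"])
    conditions = random.randint(0, 8)          # true health need
    access = 0.6 if group == "Black" else 1.0  # differential spending
    cost = conditions * access + random.random() * 0.5
    patients.append((group, conditions, cost))

def flagged_share(key):
    """Enroll the top 10% by `key`; return the Black share of enrollees."""
    top = sorted(patients, key=key, reverse=True)[:1000]
    return sum(1 for g, _, _ in top if g == "Black") / len(top)

# Cost-based score (the proxy label) vs. need-based score (analogous
# to recalibrating on the number of active chronic conditions).
by_cost = flagged_share(lambda p: p[2])
by_need = flagged_share(lambda p: p[1])
print(f"Black share of enrollees, cost-based: {by_cost:.2f}")
print(f"Black share of enrollees, need-based: {by_need:.2f}")
```

In this toy setup the cost-ranked program enrolls almost no Black patients while the need-ranked version enrolls them in proportion to illness, mirroring the paper's point that so long as the tool predicts cost well, the bias is present by design.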