Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, November 6, 2019

How to operationalize AI ethics

Khari Johnson
venturebeat.com
Originally published October 7, 2019

Here is an excerpt:

Tools, frameworks, and novel approaches

One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.

“The key one is ethical risk sweeps, periodically scheduling times to really go through what could go wrong and what are the ethical risks. Because I think a big part of ethics is thinking through what can go wrong before it does and having processes in place around what happens when there are mistakes or errors,” she said.

To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative as well as FairTest, a tool for “discovering unwarranted associations in data-driven applications” developed at academic institutions including EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to better protect user privacy.
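
The sketch below illustrates the core idea behind the federated learning technique Sobhani mentions. It is a minimal illustration, not code from the article: the clients, data, and function names are all hypothetical. Each client computes a model update on its own data, and only model parameters are ever shared with the server, never the raw records.

    # Hypothetical sketch of federated averaging for 1-D linear regression.
    # Each client takes a gradient step on its private data; the server
    # only ever sees model weights, never the underlying records.

    def local_gradient_step(w, data, lr=0.05):
        """One gradient step for y ~ w*x on a client's local data."""
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        return w - lr * grad

    def federated_round(global_w, client_datasets):
        """Each client updates locally; the server averages the results."""
        local_ws = [local_gradient_step(global_w, d) for d in client_datasets]
        return sum(local_ws) / len(local_ws)

    # Made-up client datasets; the underlying relationship is y = 2x.
    clients = [
        [(1.0, 2.0), (2.0, 4.1)],
        [(3.0, 5.9), (4.0, 8.2)],
        [(0.5, 1.1), (5.0, 9.8)],
    ]

    w = 0.0
    for _ in range(50):
        w = federated_round(w, clients)
    print(f"Learned weight: {w:.2f}")  # approaches the true slope of 2.0

The privacy benefit comes from the aggregation step: the server learns an averaged weight, not any individual client's data.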

In addition to resources recommended by panelists, Algorithm Watch maintains a running list of AI ethics guidelines. Last week, the group found that guidelines released in March 2018 by IEEE, the world’s largest technical professional association, have seen little adoption at Facebook and Google.

The info is here.

Tuesday, November 5, 2019

Moral Enhancement: A Realistic Approach

Greg Conan
British Medical Journal Blogs
Originally published August 29, 2019

Here is an excerpt:

If you could take a pill to make yourself a better person, would you do it? Could you justifiably make someone else do it, even if they do not want to?

When presented so simplistically, the idea might seem unrealistic or even impossible. The concepts of “taking a pill” and “becoming a better person” seem to belong to different categories. But many of the traits commonly considered to make one a “good person” (such as treating others fairly and kindly, without violence) are psychological traits strongly influenced by neurobiology, and neurobiology can be changed using medicine. So when and how, if ever, should medicine be used to improve moral character?

Moral bioenhancement (MBE), the concept of improving moral character using biomedical technology, has fascinated me for years—especially once I learned that it has been hotly debated in the bioethics literature since 2008. I have greatly enjoyed diving into the literature to learn about how the concept has been analyzed and presented. Much of the debate has focused on its most abstract topics, like defining its terms and relating MBE to freedom. Although my fondness for analytic philosophy means that I cannot condemn anyone for working to examine ideas with maximum clarity and specificity, any MBE proponent who actually wants MBE to be implemented must focus on realistic methods.

The info is here.

Will Robots Wake Up?

Susan Schneider
orbitermag.com
Originally published September 30, 2019

Machine consciousness, if it ever exists, may not be found in the robots that tug at our heartstrings, like R2D2. It may instead reside in some unsexy server farm in the basement of a computer science building at MIT. Or perhaps it will exist in some top-secret military program and get snuffed out, because it is too dangerous or simply too inefficient.

AI consciousness likely depends on phenomena that we cannot, at this point, gauge, such as whether some microchip yet to be invented has the right configuration, or whether AI developers or the public want conscious AI. It may even depend on something as unpredictable as the whim of a single AI designer, like Anthony Hopkins’s character in Westworld. The uncertainty we face moves me to a middle-of-the-road position, one that stops short of either techno-optimism (believing that technology can solve our problems) or biological naturalism (believing that consciousness is possible only in biological organisms).

This approach I call, simply, the “Wait and See Approach.”

In keeping with my desire to look at real-world considerations that speak to whether AI consciousness is even compatible with the laws of nature—and, if so, whether it is technologically feasible or even interesting to build—my discussion draws from concrete scenarios in AI research and cognitive science.

The info is here.

Monday, November 4, 2019

Ethical Algorithms: Promise, Pitfalls and a Path Forward

Jay Van Bavel, Tessa West, Enrico Bertini, and Julia Stoyanovich
PsyArXiv Preprints
Originally posted October 21, 2019

Abstract

Fairness in machine-assisted decision making is critical to consider, since a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making ethical mistakes, or amplifying them, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to develop algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms and offer a road map for future research on the topic.

--

The increasing role of machine learning and algorithms in decision making has revolutionized areas ranging from the media to medicine to education to industry. As the recent One Hundred Year Study on Artificial Intelligence (Stone et al., 2016) reported: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Therefore, the effective development and widespread adoption of algorithms will hinge not only on the sophistication of engineers and computer scientists, but also on the expertise of behavioral scientists.

These algorithms hold enormous promise for solving complex problems, increasing efficiency, reducing bias, and even making decision-making transparent. However, the last few decades of behavioral science have established that humans hold a number of biases and shortcomings that impact virtually every sphere of human life (Banaji & Greenwald, 2013), and discrimination can become entrenched, amplified, or even obscured when decisions are implemented by algorithms (Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). While there has been a growing awareness that programmers and organizations should pay greater attention to discrimination and other ethical considerations (Dignum, 2018), very little behavioral research has directly examined these issues. In this paper, we describe how behavioral science will play a critical role in the development of ethical algorithms and outline a roadmap for behavioral scientists and computer scientists to ensure that these algorithms are as ethical as possible.
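
To make the authors’ concern concrete, here is a minimal audit sketch of our own (the decisions, groups, and threshold are illustrative, not from the paper): comparing how often an algorithm’s favorable outcomes fall on two groups.

    # Hypothetical sketch: checking a model's decisions for disparate impact.
    # Outcomes are 1 (favorable) or 0 (unfavorable); the data is made up.

    def selection_rate(outcomes):
        """Fraction of favorable outcomes in a group."""
        return sum(outcomes) / len(outcomes)

    def disparate_impact_ratio(group_a, group_b):
        """Ratio of selection rates between two groups; 1.0 means parity."""
        return selection_rate(group_a) / selection_rate(group_b)

    group_a = [1, 0, 0, 1, 0, 0, 0, 1]  # hypothetical decisions for group A
    group_b = [1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical decisions for group B

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")
    # Ratios below ~0.8 are a common red flag (the "four-fifths rule").

Real audits control for legitimate explanatory factors before interpreting such gaps, but even this crude ratio shows how discrimination can be surfaced rather than obscured.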

The paper is here.

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Johnson, S. G. B., & Ahn, J.
(2019, September 10).
PsyArXiv
https://doi.org/10.31234/osf.io/xetwg

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality: outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts (consistent with utilitarianism), they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

Here is the beginning of the General Discussion:

Much of our behavior is tinged with shades of morality. How third parties judge those behaviors has numerous social consequences: people judged as behaving immorally can be socially ostracized, seen as less interpersonally attractive, and less able to take advantage of win–win agreements. Indeed, our desire to avoid ignominy and maintain our moral reputations motivates much of our social behavior. On the other hand, moral judgment is subject to a variety of heuristics and biases that appear to violate normative moral theories and lead to inconsistency (Bartels, Bauman, Cushman, Pizarro, & McGraw, 2015; Sunstein, 2005). Despite the dominating influence of moral judgment in everyday social cognition, little is known about how judgments of individual acts scale up into broader judgments about sequences of actions, such as moral offsetting (a morally bad act motivates a subsequent morally good act) or self-licensing (a morally good act motivates a subsequent morally bad act). That is, we need a theory of karmic accounting: how rights and wrongs add up in moral judgment.

Sunday, November 3, 2019

The Sex Premium in Religiously Motivated Moral Judgment

Liana Hone, Thomas McCauley, Eric Pedersen,
Evan Carter, and Michael McCullough
PsyArXiv Preprints

Abstract

Religion encourages people to reason about moral issues deontologically rather than on the basis of the perceived consequences of specific actions. However, recent theorizing suggests that religious people’s moral convictions are actually quite strategic (albeit unconsciously so), designed to make their worlds more amenable to their favored approaches to solving life’s basic challenges. In six experiments, we find that religious cognition places a “sex premium” on moral judgments, causing people to judge violations of conventional sexual morality as particularly objectionable. The sex premium is especially strong among highly religious people, and applies to both legal and illegal acts. Religion’s influence on moral reasoning, even if deontological, emphasizes conventional sexual norms, and may reflect the strategic projects to which religion has been applied throughout history.

From the Discussion

How does the sex premium in religiously motivated moral judgment arise during development? We see three plausible pathways. First, society’s vectors for religious cultural learning may simply devote more attention to sex and reproduction than to prosociality when they seek to influence others’ moral stances. Conservative preachers, for instance, devote more time to issues of sexual purity than do liberal preachers, and religious parents discuss the morality of sex with their children more frequently than do less religious parents, even though they discuss sex with their children less frequently overall. Second, strong emotions facilitate cultural learning by improving attention, memory, and motivation, and few human experiences generate stronger emotions than do sex and reproduction. If the emotions that regulate sexual attraction, arousal, and avoidance (e.g., sexual disgust) are stronger than those that regulate prosocial behavior (e.g., empathy, moralistic anger), then the sex premium documented here may emerge from the fact that religiously motivated sexual moralists can create more powerful cultural learning experiences than prosocial moralists can. Finally, given the extreme importance of sex and reproduction to fitness, the children of religiously adherent adults may observe that violations of local sexual standards evoke greater moral outrage and condemnation from third parties than do violations of local standards for prosocial behavior.

The research is here.

Saturday, November 2, 2019

Burnout in healthcare: the case for organisational change

A Montgomery, E Panagopoulou, A Esmail,
T Richards, & C Maslach
BMJ 2019; 366
doi: https://doi.org/10.1136/bmj.l4774
(Published 30 July 2019)

Burnout has become a big concern within healthcare. It is a response to prolonged exposure to occupational stressors, and it has serious consequences for healthcare professionals and the organisations in which they work. Burnout is associated with sleep deprivation, medical errors, poor quality of care, and low ratings of patient satisfaction. Yet often initiatives to tackle burnout are focused on individuals rather than taking a systems approach to the problem.

Evidence on the association of burnout with objective indicators of performance (as opposed to self-report) is scarce in all occupations, including healthcare. But the few examples of studies using objective indicators of patient safety at a system level confirm the association between burnout and suboptimal care. For example, in a recent study, intensive care units in which staff had high emotional exhaustion had higher patient standardised mortality ratios, even after objective unit characteristics such as workload had been controlled for.
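
For readers unfamiliar with the measure, a standardised mortality ratio compares the deaths observed in a unit with the number expected given its patient case mix (this is the standard epidemiological definition, not a formula from the article):

    \mathrm{SMR} = \frac{\text{observed deaths}}{\text{expected deaths}}

An SMR above 1 indicates more deaths than would be expected for comparable patients.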

The link between burnout and performance in healthcare is probably underestimated: job performance can still be maintained even when burnt-out staff lack mental or physical energy, as they adopt “performance protection” strategies to maintain high-priority clinical tasks and neglect low-priority secondary tasks (such as reassuring patients). Thus, evidence that the system is broken is masked until critical points are reached. Measuring and assessing burnout within a system could act as a signal to stimulate intervention before it erodes quality of care and results in harm to patients.

Burnout does not just affect patient safety. Failing to deal with burnout results in higher staff turnover, lost revenue associated with decreased productivity, financial risk, and threats to the organisation’s long term viability because of the effects of burnout on quality of care, patient satisfaction, and safety. Given that roughly 10% of the active EU workforce is engaged in the health sector in its widest sense, the direct and indirect costs of burnout could be substantial.

The info is here.

Friday, November 1, 2019

Can a Woman Rape a Man and Why Does It Matter?

Natasha McKeever
Criminal Law and Philosophy (2019)
13:599–619
https://doi.org/10.1007/s11572-018-9485-6

Abstract

Under current UK legislation, only a man can commit rape. This paper argues that this is an unjustified double standard that reinforces problematic gendered stereotypes about male and female sexuality. I first reject three potential justifications for making penile penetration a condition of rape: (1) it is physically impossible for a woman to rape a man; (2) it is a more serious offence to forcibly penetrate someone than to force them to penetrate you; (3) rape is a gendered crime. I argue that, as these justifications fail, a woman having sex with a man without his consent ought to be considered rape. I then explain some further reasons that this matters. I argue that not only is it unjust, but that it is also both a cause and a consequence of harmful stereotypes and prejudices about male and female sexuality: (1) men are ‘always up for sex’; (2) women’s sexual purity is more important than men’s; (3) sex is something men do to women. Therefore, I suggest that, if rape law were made gender neutral, these stereotypes would be undermined and this might make some (albeit small) difference to the problematic ways that sexual relations are sometimes viewed between men and women more generally.

(cut)

3 Final Thoughts on Gender and Rape

The belief that a woman cannot rape a man, therefore, might be both a cause and a consequence of these kinds of harmful gendered stereotypical beliefs:

(a) Sex is something that men do to women.
(b) This is, in part, because men have an uncontrollable desire for sex; women are less bothered about sex.
(c) Due to men’s uncontrollable desire for sex, women must moderate their behaviour so that they don’t tempt men to rape them.
(d) Men are sexually aggressive/dominant (or should be); women are not (or shouldn’t be).
(e) A woman’s worth is determined, in part, by her sexual purity; a man’s worth is determined, in part, by his sexual prowess.

Of course, these beliefs are outdated, and not held by all people. However, they are pervasive and we do see remnants of them in parts of Western society and in some non‑Western cultures.

What Clinical Ethics Can Learn From Decision Science

Michele C. Gornick and Brian J. Zikmund-Fisher
AMA J Ethics. 2019;21(10):E906-912.
doi: 10.1001/amajethics.2019.906.

Abstract

Many components of decision science are relevant to clinical ethics practice. Decision science encourages thoughtful definition of options, clarification of information needs, and acknowledgement of the heterogeneity of people’s experiences and underlying values. Attention to decision-making processes reminds participants in consultations that how decisions are made and how information is provided can change a choice. Decision science also helps reveal affective forecasting errors (errors in predictions about how one will feel in a future situation) that happen when people consider possible future health states and suggests strategies for correcting these and other kinds of biases. Implementation of decision science innovations is not always feasible or appropriate in ethics consultations, but their use increases the likelihood that an ethics consultation process will generate choices congruent with patients’ and families’ values.

Here is an excerpt:

Decision Science in Ethics Practice

Clinical ethicists can support informed, value-congruent decision making in ethically complex clinical situations by working with stakeholders to identify and address biases and the kinds of barriers just discussed. Doing so requires constantly comparing actual decision-making processes with ideal decision-making processes, responding to information deficits, and integrating stakeholder values. One key step involves regularly urging clinicians to clarify both available options and possible outcomes and encouraging patients to consider both their values and the possible meanings of different outcomes.