Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, January 15, 2018

The media needs to do more to elevate a national conversation about ethics

Arthur Caplan
Poynter.com
Originally posted December 21, 2017

Here is an excerpt:

Obviously unethical conduct has been around forever and will be into the foreseeable future. That said, it is important that the leaders of this nation and, more importantly, those leading our key institutions and professions reaffirm their commitment to the view that there are higher values worth pursuing in a just society. The fact that so many fail to live up to basic values does not mean that the values are meaningless, wrong or misplaced. They aren’t. It is rather that the organizations and professions where the epidemic of moral failure is burgeoning have put other values, often power and profits, ahead of morality.

There is no simple fix for hypocrisy. Egoism, the gross abuse of power and self-indulgence, is a very tough moral opponent in an individualistic society like America. Short-term reward is deceptively more attractive than slogging out the virtues in the name of the long haul. If we are to prepare our children to succeed, then attending to their moral development is as important as anything we can do. If our leaders are to truly lead, then we have to reward those who do, not those who don’t, won’t or can’t. Are we?

The article is here.

Lesion network localization of criminal behavior

R. Ryan Darby, Andreas Horn, Fiery Cushman, and Michael D. Fox
The Proceedings of the National Academy of Sciences

Abstract

Following brain lesions, previously normal patients sometimes exhibit criminal behavior. Although rare, these cases can lend unique insight into the neurobiological substrate of criminality. Here we present a systematic mapping of lesions with known temporal association to criminal behavior, identifying 17 lesion cases. The lesion sites were spatially heterogeneous, including the medial prefrontal cortex, orbitofrontal cortex, and different locations within the bilateral temporal lobes. No single brain region was damaged in all cases. Because lesion-induced symptoms can come from sites connected to the lesion location and not just the lesion location itself, we also identified brain regions functionally connected to each lesion location. This technique, termed lesion network mapping, has recently identified regions involved in symptom generation across a variety of lesion-induced disorders. All lesions were functionally connected to the same network of brain regions. This criminality-associated connectivity pattern was unique compared with lesions causing four other neuropsychiatric syndromes. This network includes regions involved in morality, value-based decision making, and theory of mind, but not regions involved in cognitive control or empathy. Finally, we replicated our results in a separate cohort of 23 cases in which a temporal relationship between brain lesions and criminal behavior was implied but not definitive. Our results suggest that lesions in criminals occur in different brain locations but localize to a unique resting state network, providing insight into the neurobiology of criminal behavior.
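
At its core, the lesion network mapping step described in this abstract is seed-based functional connectivity: each traced lesion is used as a "seed" in resting-state fMRI data, and its mean signal is correlated with every other brain voxel to produce a lesion network map. The sketch below illustrates that general idea using the nilearn library; the filenames, smoothing parameter, and single-scan setup are hypothetical assumptions for illustration only, not the authors' full pipeline.

```python
# Minimal sketch of the seed-based functional connectivity step behind
# lesion network mapping: treat a traced lesion as the "seed" and correlate
# its mean resting-state signal with every other brain voxel.
# Filenames and parameters are hypothetical; this illustrates the general
# idea, not the authors' full pipeline.
from nilearn.maskers import NiftiMasker  # assumes nilearn >= 0.9

lesion_mask = "case01_lesion_mask.nii.gz"     # binary lesion tracing (assumed)
rest_scan = "connectome_subject_rest.nii.gz"  # normative resting-state scan (assumed)

# Mean time series within the lesion "seed", re-standardized to unit variance
seed_masker = NiftiMasker(mask_img=lesion_mask, standardize=True)
seed_ts = seed_masker.fit_transform(rest_scan).mean(axis=1)
seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()

# Standardized time series for every voxel in the brain
brain_masker = NiftiMasker(smoothing_fwhm=6, standardize=True)
brain_ts = brain_masker.fit_transform(rest_scan)

# Pearson correlation of each voxel with the seed -> one lesion network map
corr = brain_ts.T @ seed_ts / seed_ts.shape[0]
brain_masker.inverse_transform(corr).to_filename("case01_network_map.nii.gz")
```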

Significance

Cases like that of Charles Whitman, who murdered 16 people after growth of a brain tumor, have sparked debate about why some brain lesions, but not others, might lead to criminal behavior. Here we systematically characterize such lesions and compare them with lesions that cause other symptoms. We find that lesions in multiple different brain areas are associated with criminal behavior. However, these lesions all fall within a unique functionally connected brain network involved in moral decision making. Furthermore, connectivity to competing brain networks predicts the abnormal moral decisions observed in these patients. These results provide insight into why some brain lesions, but not others, might predispose to criminal behavior, with potential neuroscience, medical, and legal implications.

The article is here.

Sunday, January 14, 2018

The Criminalization of Compliance

Todd Haugh
92 Notre Dame L. Rev. 1215 (2017).

Abstract

Corporate compliance is becoming increasingly “criminalized.” What began as a means of industry self-regulation has morphed into a multi-billion-dollar effort to avoid government intervention in business, specifically criminal and quasi-criminal investigations and prosecutions. In order to avoid application of the criminal law, companies have adopted compliance programs that are motivated by and mimic that law, using the precepts of criminal legislation, enforcement, and adjudication to advance their compliance goals. This approach to compliance is inherently flawed, however—it can never be fully effective in abating corporate wrongdoing. Criminalized compliance regimes are inherently ineffective because they impose unintended behavioral consequences on corporate employees. Employees subject to criminalized compliance have greater opportunities to rationalize their future unethical or illegal behavior. Rationalizations are a key component in the psychological process necessary for the commission of corporate crime—they allow offenders to square their self-perception as “good people” with the illegal behavior they are contemplating, thereby allowing the behavior to go forward. Criminalized compliance regimes fuel these rationalizations, and in turn, bad corporate conduct. By importing into the corporation many of the criminal law’s delegitimizing features, criminalized compliance creates space for rationalizations, facilitating the necessary precursors to the commission of white collar and corporate crime. The result is that many compliance programs, by mimicking the criminal law in hopes of reducing employee misconduct, are actually fostering it. This insight, which offers a new way of conceptualizing corporate compliance, explains the ineffectiveness of many compliance programs and also suggests how companies might go about fixing them.

The article is here.

Saturday, January 13, 2018

The costs of being consequentialist: Social perceptions of those who harm and help for the greater good

Everett, J. A. C., Faber, N. S., Savulescu, J., & Crockett, M. (2017, December 15).
The Cost of Being Consequentialist. Retrieved from psyarxiv.com/a2kx6

Abstract

Previous work has demonstrated that people are more likely to trust “deontological” agents who reject instrumentally harming one person to save a greater number than “consequentialist” agents who endorse such harm in pursuit of the greater good. It has been argued that these differential social perceptions of deontological vs. consequentialist agents could explain the higher prevalence of deontological moral intuitions. Yet consequentialism involves much more than decisions to endorse instrumental harm: another critical dimension is impartial beneficence, defined as the impartial maximization of the greater good, treating the well-being of every individual as equally important. In three studies (total N = 1,634), we investigated preferences for deontological vs. consequentialist social partners in both the domains of instrumental harm and impartial beneficence, and considered how such preferences vary across different types of social relationships. Our results demonstrate consistent preferences for deontological over consequentialist agents across both domains of instrumental harm and impartial beneficence: deontological agents were viewed as more moral and trustworthy, and were actually entrusted with more money in a resource distribution task. However, preferences for deontological agents were stronger when those preferences were revealed via aversion to instrumental harm than via impartial beneficence. Finally, in the domain of instrumental harm, deontological agents were uniformly preferred across a variety of social roles, but in the domain of impartial beneficence, people preferred deontologists for roles requiring direct interaction (friend, spouse, boss) but not for more distant roles with little-to-no personal interaction (political leader).

The research is here.

Friday, January 12, 2018

The Normalization of Corruption in Organizations

Blake E. Ashforth and Vikas Anand
Research in Organizational Behavior
Volume 25, 2003, Pages 1-52

Abstract

Organizational corruption imposes a steep cost on society, easily dwarfing that of street crime. We examine how corruption becomes normalized, that is, embedded in the organization such that it is more or less taken for granted and perpetuated. We argue that three mutually reinforcing processes underlie normalization: (1) institutionalization, where an initial corrupt decision or act becomes embedded in structures and processes and thereby routinized; (2) rationalization, where self-serving ideologies develop to justify and perhaps even valorize corruption; and (3) socialization, where naïve newcomers are induced to view corruption as permissible if not desirable. The model helps explain how otherwise morally upright individuals can routinely engage in corruption without experiencing conflict, how corruption can persist despite the turnover of its initial practitioners, how seemingly rational organizations can engage in suicidal corruption, and how an emphasis on the individual as evildoer misses the point that systems and individuals are mutually reinforcing.

The article is here.

The Age of Outrage

Jonathan Haidt
Essay in City Journal, derived from a speech
December 17, 2017

Here is an excerpt:

When we look back at the ways our ancestors lived, there’s no getting around it: we are tribal primates. We are exquisitely designed and adapted by evolution for life in small societies with intense, animistic religion and violent intergroup conflict over territory. We love tribal living so much that we invented sports, fraternities, street gangs, fan clubs, and tattoos. Tribalism is in our hearts and minds. We’ll never stamp it out entirely, but we can minimize its effects because we are a behaviorally flexible species. We can live in many different ways, from egalitarian hunter-gatherer groups of 50 individuals to feudal hierarchies binding together millions. And in the last two centuries, a lot of us have lived in large, multi-ethnic secular liberal democracies. So clearly that is possible. But how much margin of error do we have in such societies?

Here is the fine-tuned liberal democracy hypothesis: as tribal primates, human beings are unsuited for life in large, diverse secular democracies, unless you get certain settings finely adjusted to make possible the development of stable political life. This seems to be what the Founding Fathers believed. Jefferson, Madison, and the rest of those eighteenth-century deists clearly did think that designing a constitution was like designing a giant clock, a clock that might run forever if they chose the right springs and gears.

Thankfully, our Founders were good psychologists. They knew that we are not angels; they knew that we are tribal creatures. As Madison wrote in Federalist 10: “the latent causes of faction are thus sown in the nature of man.” Our Founders were also good historians; they were well aware of Plato’s belief that democracy is the second worst form of government because it inevitably decays into tyranny. Madison wrote in Federalist 10 about pure or direct democracies, which he said are quickly consumed by the passions of the majority: “such democracies have ever been spectacles of turbulence and contention . . . and have in general been as short in their lives as they have been violent in their deaths.”

So what did the Founders do? They built in safeguards against runaway factionalism, such as the division of powers among the three branches, and an elaborate series of checks and balances. But they also knew that they had to train future generations of clock mechanics. They were creating a new kind of republic, which would demand far more maturity from its citizens than was needed in nations ruled by a king or other Leviathan.

The full speech is here.

Thursday, January 11, 2018

Is Blended Intelligence the Next Stage of Human Evolution?

Richard Yonck
TED Talk
Published December 8, 2017

What is the future of intelligence? Humanity is still an extremely young species and yet our prodigious intellects have allowed us to achieve all manner of amazing accomplishments in our relatively short time on this planet, most especially during the past couple of centuries or so. Yet, it would be short-sighted of us to assume our species has reached the end of our journey, having become as intelligent as we will ever be. On the contrary, it seems far more likely that if we should survive our “infancy,” there is probably much more time ahead of us than there is looking back. If that’s the case, then our descendants of only a few thousand years from now will probably be very, very different from you and me.


The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2.
IEEE, 2017.

Introduction

As the use and impact of autonomous and intelligent systems (A/IS) become pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles. These systems have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between people and technology that is needed for its fruitful, pervasive use in our daily lives.

To be able to contribute in a positive, non-dogmatic way, we, the techno-scientific communities, need to enhance our self-reflection, we need to have an open and honest debate around our imaginary, our sets of explicit or implicit values, our institutions, symbols and representations.

Eudaimonia, as elucidated by Aristotle, is a practice that defines human well-being as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.

Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age. Measuring and honoring the potential of holistic economic prosperity should become more important than pursuing one-dimensional goals like productivity increase or GDP growth.

The guidelines are here.

Wednesday, January 10, 2018

Failing better

Erik Angner
BPP Blog, the companion blog to the new journal Behavioural Public Policy
Originally posted June 2, 2017

Cass R. Sunstein’s ‘Nudges That Fail’ explores why some nudges work, why some fail, and what should be done in the face of failure. It’s a useful contribution in part because it reminds us that nudging – roughly speaking, the effort to improve people’s welfare by helping them make better choices without interfering with their liberty or autonomy – is harder than it might seem. When people differ in beliefs, values, and preferences, or when they differ in their responses to behavioral interventions, for example, it may be difficult to design a nudge that benefits at least some without violating anyone’s liberty or autonomy. But the paper is a useful contribution also because it suggests concrete, positive steps that may be taken to help us get better simultaneously at enhancing welfare and at respecting liberty and autonomy.

(cut)

Moreover, even if a nudge is on the net welfare enhancing and doesn’t violate any other values, it does not follow that it should be implemented. As economists are fond of telling you, everything has an opportunity cost, and so do nudges. If whatever resources would be used in the implementation of the nudge could be put to better use elsewhere, we would have reason not to implement it. If we did anyway, we would be guilty of the Econ 101 fallacy of ignoring opportunity costs, which would be embarrassing.
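
To make the opportunity-cost point concrete, here is a toy comparison, with entirely hypothetical figures, of welfare gain per dollar for a nudge versus an alternative use of the same budget. It illustrates the reasoning in the paragraph above; the intervention names and numbers are invented for the example, not drawn from Sunstein's paper or Angner's post.

```python
# Toy illustration of the opportunity-cost argument: a nudge can be net
# welfare-enhancing and still be the wrong choice if the same budget does
# more good spent elsewhere. All figures are hypothetical.

def welfare_per_dollar(welfare_gain: float, cost: float) -> float:
    """Expected welfare gain (in arbitrary units) per dollar spent."""
    return welfare_gain / cost

interventions = {
    "default-enrollment nudge": welfare_per_dollar(welfare_gain=120_000, cost=50_000),
    "direct subsidy":           welfare_per_dollar(welfare_gain=200_000, cost=50_000),
}

best = max(interventions, key=interventions.get)
for name, ratio in interventions.items():
    print(f"{name}: {ratio:.2f} welfare units per dollar")
print(f"Best use of the budget (other considerations aside): {best}")
```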

The blog post is here.