Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, November 17, 2019

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Stefan Schubert, Lucius Caviola & Nadira S. Faber
Scientific Reports volume 9, Article number: 15100 (2019)

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

The research is here.

Saturday, November 16, 2019

Moral grandstanding in public discourse: Status-seeking motives as a potential explanatory mechanism in predicting conflict

Grubbs JB, Warmke B, Tosi J, James AS, Campbell WK
(2019) PLoS ONE 14(10): e0223749.
https://doi.org/10.1371/journal.pone.0223749

Abstract

Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted six studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, and Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, Baseline N = 499, follow-up n = 296); and a large, one-week YouGov sample matched to U.S. demographic norms (Study 6, Baseline N = 2,519, follow-up n = 1,776). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding motivation was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.

Conclusion

Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links various domains of psychology with moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Within the present work, we focused on the motivation to engage in MG. Specifically, MG Motivation is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors seem to be consistent with the construct of status-seeking more broadly, appearing to represent prestige and dominance striving, both of which were found to be associated with greater interpersonal conflict and polarization. These results were consistently replicated in samples of U.S. undergraduates, nationally representative cross-sectional samples of U.S. residents, and longitudinal studies of adults in the U.S. Collectively, these results suggest that MG Motivation is a useful psychological phenomenon that has the potential to aid our understanding of the intraindividual mechanisms driving caustic public discourse.

Friday, November 15, 2019

Gartner Fellow discusses ethics in artificial intelligence

Teena Maddox
techrepublic.com
Originally published October 28, 2019

Here is an excerpt:

There are tons of ways you can use AI ethically and also unethically. One way that is typically being cited is, for instance, using attributes of people that shouldn't be used. For instance, when granting somebody a mortgage or access to something or making other decisions. Racial profiling is typically mentioned as an example. So, you need to be mindful which attributes are being used for making decisions. How do the algorithms learn? Other ways of abuse of AI is, for instance, with autonomous killer drones. Would we allow algorithms to decide who gets bombed by a drone and who does not? And most people seem to agree that autonomous killer drones are not a very good idea.

The most important thing that a developer can do in order to create ethical AI is to not think of this as technology, but an exercise in self-reflection. Developers have certain biases. They have certain characteristics themselves. For instance, developers are keen to search for the optimal solution of a problem; it is built into their brains. But ethics is a very pluralistic thing. There's different people who have different ideas. There is not one optimal answer of what is good and bad. First and foremost, developers should be aware of their own ethical biases of what they think is good and bad and create an environment of diversity where they test those assumptions and where they test their results. The developer brain isn't the only brain or type of brain that is out there, to say the least.

So, AI and ethics is really a story of hope. It's for the very first time that a discussion of ethics is taking place before the widespread implementation, unlike in previous rounds where the ethical considerations were taking place after the effects.

The info is here.

Is Moral Relativism Really a Problem?

Thomas Polzler
Scientific American Blog
Originally published October 16, 2019

Here is an excerpt:

Warnings against moral relativism are most often based on theoretical speculation. Critics consider the view’s nature and add certain assumptions about human psychology. Then they infer how being a relativist might affect a person’s behavior. For example, for a relativist, even actions such as murder or rape can never be really or absolutely wrong; they are only wrong to the extent that the relativist or most members of his or her culture believe them to be so.

One may therefore worry that relativists are less motivated to refrain from murdering and raping than people who regard these actions as objectively wrong. While this scenario may sound plausible, it is important to note that relativism’s effects can ultimately be determined only by relevant studies.

So far, scientific investigations do not support the suspicion that moral relativism is problematic. True, there are two studies that do suggest such a conclusion. In one of them, participants were led to think about morality in either relativist or objectivist terms. It turned out that subjects in the relativist condition were more likely to cheat in a lottery and to state that they would be willing to steal than those in the objectivist condition. In the other study, participants who had been exposed to relativist ideas were less likely to donate to charity than those who had been exposed to objectivist ones.

That said, there is also evidence that associates moral relativism with positive behaviors. In one of her earlier studies, Wright and her colleagues informed their participants that another person disagreed with one of their moral judgments. Then the researchers measured the subjects’ degree of tolerance for this person’s divergent moral view. For example, participants were asked how willing they would be to interact with the person, how willing they would be to help him or her and how comfortable they generally were with another individual denying one of their moral judgments. It turned out that subjects with relativist leanings were more tolerant toward the disagreeing person than those who had tended toward objectivism.

The info is here.

Thursday, November 14, 2019

Assessing risk, automating racism

Ruha Benjamin
Science  25 Oct 2019:
Vol. 366, Issue 6464, pp. 421-422

Here is an excerpt:

Practically speaking, their finding means that if two people have the same risk score that indicates they do not need to be enrolled in a “high-risk management program,” the health of the Black patient is likely much worse than that of their White counterpart. According to Obermeyer et al., if the predictive tool were recalibrated to actual needs on the basis of the number and severity of active chronic illnesses, then twice as many Black patients would be identified for intervention. Notably, the researchers went well beyond the algorithm developers by constructing a more fine-grained measure of health outcomes, by extracting and cleaning data from electronic health records to determine the severity, not just the number, of conditions. Crucially, they found that so long as the tool remains effective at predicting costs, the outputs will continue to be racially biased by design, even as they may not explicitly attempt to take race into account. For this reason, Obermeyer et al. engage the literature on “problem formulation,” which illustrates that depending on how one defines the problem to be solved—whether to lower health care costs or to increase access to care—the outcomes will vary considerably.
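The problem-formulation point lends itself to a small illustration. The sketch below uses hypothetical, simulated data (the group labels, cost figures, and flagging threshold are invented for the example and are not drawn from Obermeyer et al.) to show how ranking patients by recorded cost rather than by a direct measure of illness burden can flag a systematically different group for a high-risk program when one group's costs run lower at the same level of sickness.

```python
# Hypothetical illustration (not the Obermeyer et al. pipeline): the same
# simulated patients, ranked for a "high-risk program" by two different
# prediction targets -- recorded cost vs. a direct measure of illness burden.
import random

random.seed(0)

def simulate_patient(group):
    """Return one simulated patient as a dict.

    Assumption for illustration only: at the same illness burden, group "B"
    patients generate lower recorded costs (e.g., because of unequal access
    to care), which is the mechanism the excerpt describes.
    """
    conditions = random.randint(0, 8)                  # active chronic conditions
    cost_per_condition = 1000 if group == "A" else 700
    cost = conditions * cost_per_condition + random.gauss(0, 300)
    return {"group": group, "conditions": conditions, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(500)]

def top_decile(patients, key):
    """Flag the 10% of patients with the highest value of `key` (random tie-break)."""
    ranked = sorted(patients, key=lambda p: (p[key], random.random()), reverse=True)
    return ranked[: len(ranked) // 10]

for label, key in (("cost-based", "cost"), ("need-based", "conditions")):
    flagged = top_decile(patients, key)
    share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
    print(f"{label} flagging: {share_b:.0%} of flagged patients are from group B")
```

Under these assumptions the cost-based ranking flags almost no group-B patients, while the need-based ranking flags roughly half, echoing the paper's point that the choice of prediction target, rather than any explicit use of race, is what produces the disparity.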

Cooperation and Learning in Unfamiliar Situations

McAuliffe, W. H. B., Burton-Chellew, M. N., &
McCullough, M. E. (2019).
Current Directions in Psychological Science, 
28(5), 436–440. https://doi.org/10.1177/0963721419848673

Abstract

Human social life is rife with uncertainty. In any given encounter, one can wonder whether cooperation will generate future benefits. Many people appear to resolve this dilemma by initially cooperating, perhaps because (a) encounters in everyday life often have future consequences, and (b) the costs of alienating oneself from long-term social partners often outweighed the short-term benefits of acting selfishly over our evolutionary history. However, because cooperating with other people does not always advance self-interest, people might also learn to withhold cooperation in certain situations. Here, we review evidence for two ideas: that people (a) initially cooperate or not depending on the incentives that are typically available in their daily lives and (b) also learn through experience to adjust their cooperation on the basis of the incentives of unfamiliar situations. We compare these claims with the widespread view that anonymously helping strangers in laboratory settings is motivated by altruistic desires. We conclude that the evidence is more consistent with the idea that people stop cooperating in unfamiliar situations because they learn that it does not help them, either financially or through social approval.

Conclusion

Experimental economists have long emphasized the role of learning in social decision-making (e.g., Binmore, 1999). However, cooperation researchers have only recently considered how people’s past social interactions shape their expectations in novel social situations. An important lesson from the research reviewed here is that people’s behavior in any single situation is not necessarily a direct read-out of how selfish or altruistic they are, especially if the situation’s incentives differ from what they normally encounter in everyday life.

Wednesday, November 13, 2019

Dynamic Moral Judgments and Emotions

Magda Osman
Published Online June 2015 in SciRes.
http://www.scirp.org/journal/psych

Abstract

We may experience strong moral outrage when we read a news headline that describes a prohibited action, but when we gain additional information by reading the main news story, do our emotional experiences change at all, and if they do in what way do they change? In a single online study with 80 participants the aim was to examine the extent to which emotional experiences (disgust, anger) and moral judgments track changes in information about a moral scenario. The evidence from the present study suggests that we systematically adjust our moral judgments and our emotional experiences as a result of exposure to further information about the morally dubious action referred to in a moral scenario. More specifically, the way in which we adjust our moral judgments and emotions appears to be based on information signalling whether a morally dubious act is permitted or prohibited.

From the Discussion

The present study showed that moral judgments changed in response to different details concerning the moral scenarios, and while participants gave the most severe judgments for the initial limited information regarding the scenario (i.e. the headline), they adjusted the severity of their judgments downwards as more information was provided (i.e. main story, conclusion). In other words, when context was provided for why a morally dubious action was carried out, people used this to inform their later judgments and consciously integrated this new information into their judgments of the action. Crucially, this reflects the fact that judgments and emotions are not fixed, and that they are likely to operate on rational processes (Huebner, 2011, 2014; Teper et al., 2015). More to the point, this evidence suggests that there may well be an integrated representation of the moral scenario that is based on informational content as well as personal emotional experiences that signal the valence on which the information should be judged. The evidence from the present study suggests that both moral judgments and emotional experiences change systematically in response to changes in information that critically concern the way in which a morally dubious action should be evaluated.

A pdf can be downloaded here.

MIT Creates World’s First Psychopath AI By Feeding It Reddit Violent Content

Navin Bondade
www.techgrabyte.com
Originally posted October 2019

The psychopathic side of human intelligence is broader and darker than we yet fully understand, but scientists have nonetheless attempted to implement a form of psychopathy in artificial intelligence.

Scientists at MIT have created the world's first psychopath AI, called Norman. The purpose of Norman is to demonstrate that an AI is not unfair or biased unless such data is fed into it.

MIT's scientists created Norman by training it on violent and disturbing content, namely images of people dying in gruesome circumstances taken from an unnamed Reddit page, before showing it a series of Rorschach inkblot tests.

The scientists built a dataset from this unnamed Reddit page, one dedicated to documenting and observing the disturbing reality of death, and trained Norman to perform image captioning on it.

The info is here.

Tuesday, November 12, 2019

Errors in Moral Forecasting: Perceptions of Affect Shape the Gap Between Moral Behaviors and Moral Forecasts

Teper, R., Zhong, C.‐B., and Inzlicht, M. (2015)
Social and Personality Psychology Compass, 9, 1– 14,
doi: 10.1111/spc3.12154

Abstract

Within the past decade, the field of moral psychology has begun to disentangle the mechanics behind moral judgments, revealing the vital role that emotions play in driving these processes. However, given the well‐documented dissociation between attitudes and behaviors, we propose that an equally important issue is how emotions inform actual moral behavior – a question that has been relatively ignored up until recently. By providing a review of recent studies that have begun to explore how emotions drive actual moral behavior, we propose that emotions are instrumental in fueling real‐life moral actions. Because research examining the role of emotional processes on moral behavior is currently limited, we push for the use of behavioral measures in the field in the hopes of building a more complete theory of real‐life moral behavior.

Conclusion

Long gone are the days when emotion was written off as a distractor or a roadblock to effective moral decision making. There now exists a great deal of evidence bolstering the idea that emotions are actually necessary for initiating adaptive behavior (Bechara, 2004; Damasio, 1994; Panksepp & Biven, 2012). Furthermore, evidence from the field of moral psychology points to the fact that individuals rely quite heavily on emotional and intuitive processes when engaging in moral judgments (e.g. Haidt, 2001). However, up until recently, the playing field of moral psychology has been heavily dominated by research revolving around moral judgments alone, especially when investigating the role that emotions play in motivating moral decision-making.

A pdf can be downloaded here.