Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Imperative. Show all posts

Thursday, November 19, 2020

The Psychology of Moral Conviction

Skitka, L. J., Hanson, B. E., et al.
Annual Review of Psychology
(2021). 72:1.

Abstract

This review covers theory and research on the psychological characteristics and consequences of attitudes that are experienced as moral convictions, that is, attitudes that people perceive as grounded in a fundamental distinction between right and wrong. Morally convicted attitudes represent something psychologically distinct from other constructs (e.g., strong but nonmoral attitudes or religious beliefs), are perceived as universally and objectively true, and are comparatively immune to authority or peer influence. Variance in moral conviction also predicts important social and political consequences. Stronger moral conviction about a given attitude object, for example, is associated with greater intolerance of attitude dissimilarity, resistance to procedural solutions for conflict about that issue, and increased political engagement and volunteerism in that attitude domain. Finally, we review recent research that explores the processes that lead to attitude moralization; we integrate these efforts and conclude with a new domain theory of attitude moralization.

From the Conclusion

As this review has revealed, attitudes held with moral conviction have a psychological profile that corresponds well with the domain theory of attitudes. Moral convictions differ from otherwise strong but nonmoral attitudes by being perceived as more objectively and universally true, authority independent, and obligatory. Beyond these distinctions, moral convictions predict the degree to which people perceive that the ends justify the means in achieving morally preferred outcomes, their unwillingness to compromise on morally convicted issues, and both increased political engagement and willingness to volunteer on the one hand, and acceptance of lying, violence, and cheating to achieve preferred ends on the other.

Wednesday, December 13, 2017

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Tomas Ståhl, Maarten P. Zaal, and Linda J. Skitka
PLOS One
Published November 16, 2017

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.

The research is here.

Monday, December 11, 2017

To think critically, you have to be both analytical and motivated

John Timmer
Ars Technica
Originally published November 15, 2017

Here is an excerpt:

One of the proposed solutions to this issue is to incorporate more critical thinking into our education system. But critical thinking is more than just a skill set; you have to recognize when to apply it, do so effectively, and then know how to respond to the results. Understanding what makes a person effective at analyzing fake news and conspiracy theories has to take all of this into account. A small step toward that understanding comes from a recently released paper, which looks at how analytical thinking and motivated skepticism interact to make someone an effective critical thinker.

Valuing rationality

The work comes courtesy of the University of Illinois at Chicago's Tomas Ståhl and Jan-Willem van Prooijen at VU Amsterdam. This isn't the first time we've heard from Ståhl; last year, he published a paper on what he termed "moralizing epistemic rationality." In it, he looked at people's thoughts on the place critical thinking should occupy in their lives. The research identified two classes of individuals: those who valued their own engagement with critical thinking, and those who viewed it as a moral imperative that everyone engage in this sort of analysis.

The information is here.

The target article is here.

Monday, December 4, 2017

Ray Kurzweil on Turing Tests, Brain Extenders, and AI Ethics

Nancy Kaszerman
Wired.com
Originally posted November 13, 2017

Here is an excerpt:

There has been a lot of focus on AI ethics, how to keep the technology safe, and it's kind of a polarized discussion like a lot of discussions nowadays. I've actually talked about both promise and peril for quite a long time. Technology is always going to be a double-edged sword. Fire kept us warm, cooked our food, and burned down our houses. These technologies are much more powerful. It's also a long discussion, but I think we should go through three phases, at least I did, in contemplating this. First is delight at the opportunity to overcome age-old afflictions: poverty, disease, and so on. Then alarm that these technologies can be destructive and cause even existential risks. And finally I think where we need to come out is an appreciation that we have a moral imperative to continue progress in these technologies because, despite the progress we've made—and that's a-whole-nother issue, people think things are getting worse but they're actually getting better—there's still a lot of human suffering to be overcome. It's only continued progress particularly in AI that's going to enable us to continue overcoming poverty and disease and environmental degradation while we attend to the peril.

And there's a good framework for doing that. Forty years ago, there were visionaries who saw both the promise and the peril of biotechnology, basically reprogramming biology away from disease and aging. So they held a conference called the Asilomar Conference at the conference center in Asilomar, and came up with ethical guidelines and strategies—how to keep these technologies safe. Now it's 40 years later. We are getting clinical impact of biotechnology. It's a trickle today, it'll be a flood over the next decade. The number of people who have been harmed either accidentally or intentionally by abuse of biotechnology so far has been zero. It's a good model for how to proceed.

The article is here.

Monday, August 14, 2017

Moral alchemy: How love changes norms

Rachel W. Magid and Laura E. Schulz
Cognition
Volume 167, October 2017, Pages 135-150

Abstract

We discuss a process by which non-moral concerns (that is, concerns agreed to be non-moral within a particular cultural context) can take on moral content. We refer to this phenomenon as moral alchemy and suggest that it arises because moral obligations of care entail recursively valuing loved ones’ values, thus allowing propositions with no moral weight in themselves to become morally charged. Within this framework, we predict that when people believe a loved one cares about a behavior more than they do themselves, the moral imperative to care about the loved one’s interests will raise the value of that behavior, such that people will be more likely to infer that third parties will see the behavior as wrong (Experiment 1) and the behavior itself as more morally important (Experiment 2) than when the same behaviors are considered outside the context of a caring relationship. The current study confirmed these predictions.

The article is here.