Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Logic.

Thursday, July 4, 2019

Exposure to opposing views on social media can increase political polarization

Christopher Bail, Lisa Argyle, and others
PNAS, September 11, 2018, 115(37), 9216-9221; first published August 28, 2018. https://doi.org/10.1073/pnas.1804840115

Abstract

There is mounting concern that social media sites contribute to political polarization by creating “echo chambers” that insulate people from opposing views about current events. We surveyed a large sample of Democrats and Republicans who visit Twitter at least three times each week about a range of social policy issues. One week later, we randomly assigned respondents to a treatment condition in which they were offered financial incentives to follow a Twitter bot for 1 month that exposed them to messages from those with opposing political ideologies (e.g., elected officials, opinion leaders, media organizations, and nonprofit groups). Respondents were resurveyed at the end of the month to measure the effect of this treatment, and at regular intervals throughout the study period to monitor treatment compliance. We find that Republicans who followed a liberal Twitter bot became substantially more conservative posttreatment. Democrats exhibited slight increases in liberal attitudes after following a conservative Twitter bot, although these effects are not statistically significant. Notwithstanding important limitations of our study, these findings have significant implications for the interdisciplinary literature on political polarization and the emerging field of computational social science.
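
As a concrete illustration of the design, the sketch below simulates the kind of pre/post comparison the abstract describes. Everything in it is invented for illustration (sample sizes, effect sizes, variable names); the authors' actual analysis used survey-based ideology scales and regression models that are not reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 7-point ideology scores (higher = more conservative) for
# Republicans offered the liberal bot (treatment) vs. not (control).
pre_treat = rng.normal(5.0, 1.0, 200)
post_treat = pre_treat + rng.normal(0.3, 0.5, 200)   # assumed rightward shift
pre_ctrl = rng.normal(5.0, 1.0, 200)
post_ctrl = pre_ctrl + rng.normal(0.0, 0.5, 200)     # assumed no shift

# Difference-in-differences estimate of the treatment effect.
effect = (post_treat - pre_treat).mean() - (post_ctrl - pre_ctrl).mean()
t, p = stats.ttest_ind(post_treat - pre_treat, post_ctrl - pre_ctrl)
print(f"estimated effect = {effect:.2f} scale points (p = {p:.4f})")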

The research is here.

Happy Fourth of July!!!

Friday, May 24, 2019

Immutable morality: Even God could not change some moral facts

Madeline Reinecke & Zachary Horne
PsyArXiv
Last edited December 24, 2018

Abstract

The idea that morality depends on God is a widely held belief. This belief entails that the moral “facts” could be otherwise because, in principle, God could change them. Yet, some moral propositions seem so obviously true (e.g., the immorality of killing someone just for pleasure) that it is hard to imagine how they could be otherwise. In two experiments, we investigated people’s intuitions about the immutability of moral facts. Participants judged whether it was even possible, or possible for God, to change moral, logical, and physical facts. In both experiments, people judged that altering some moral facts was impossible—not even God could turn morally wrong acts into morally right acts. Strikingly, people thought that God could make physically impossible and logically impossible events occur. These results demonstrate the strength of people’s metaethical commitments and shed light on the nature of morality and its centrality to thinking and reasoning.

The research is here.

Wednesday, December 13, 2017

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Tomas Ståhl, Maarten P. Zaal, and Linda J. Skitka
PLOS One
Published: November 16, 2016

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.
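
For readers unfamiliar with the two reliability statistics the abstract mentions, the sketch below shows how internal consistency (Cronbach's alpha) and test-retest reliability are typically computed. The data and the nine-item scale length are hypothetical, not taken from the paper.

import numpy as np

def cronbach_alpha(items):
    # Internal consistency: rows are respondents, columns are scale items.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(4.0, 1.0, (300, 1))           # latent trait
items_t1 = true_score + rng.normal(0, 0.7, (300, 9))  # hypothetical 9 items
items_t2 = true_score + rng.normal(0, 0.7, (300, 9))  # same scale at retest

print(f"Cronbach's alpha: {cronbach_alpha(items_t1):.2f}")
retest_r = np.corrcoef(items_t1.mean(axis=1), items_t2.mean(axis=1))[0, 1]
print(f"Test-retest r: {retest_r:.2f}")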

The research is here.

Sunday, April 2, 2017

The Problem of Evil: Crash Course Philosophy #13

Published on May 9, 2016

After weeks of exploring the existence and nature of God, today Hank explores one of the biggest problems in theism, and possibly the biggest philosophical question humanity faces: why is there evil?


Saturday, May 14, 2016

On the Source of Human Irrationality

Mike Oaksford et al.
Trends in Cognitive Sciences, Volume 20, Issue 5, 336-344

Summary

Reasoning and decision making are error prone. This is often attributed to a fast, phylogenetically old System 1. It is striking, however, that perceptuo-motor decision making in humans and animals is rational. These results are consistent with perceptuo-motor strategies emerging in Bayesian brain theory that also appear in human data selection. People seem to have access, although limited, to unconscious generative models that can generalise to explain other verbal reasoning results. Error does not emerge predominantly from System 1, but rather seems to emerge from the later-evolved System 2 that involves working memory and language. However, language also sows the seeds of error correction by moving reasoning into the social domain. This reversal of roles suggests key areas of theoretical integration and new empirical directions.

Trends

System 1 is supposedly the main cause of human irrationality. However, recent work on animal decision making, human perceptuo-motor decision making, and logical intuitions shows that this phylogenetically older system is rational.

Bayesian brain theory has recently proposed perceptuo-motor strategies identical to strategies proposed in Bayesian approaches to conscious verbal reasoning, suggesting that similar generative models are available at both levels.

Recent approaches to conditional inference using causal Bayes nets confirm this account, which can also generalise to logical intuitions; a toy sketch of this probabilistic treatment of conditional inference appears below.

People have only imperfect access to System 1. Errors arise from inadequate interrogation of System 1, working memory limitations, and mis-description of our records of these interrogations. However, there is evidence that such errors may be corrected by moving reasoning to the social domain facilitated by language.
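
To make the probabilistic account of conditional inference concrete, here is a toy sketch in the spirit of the Bayesian approach described above (the parameter values are invented, and this is not the authors' model code). Each classical inference form is scored as a conditional probability computed from a joint distribution over the antecedent p and consequent q.

def inference_endorsements(p_p, q_given_p, q_given_not_p):
    # Model endorsement rates for the four conditional inference forms:
    # modus ponens (MP), modus tollens (MT), affirming the consequent (AC),
    # and denying the antecedent (DA).
    p_not_p = 1 - p_p
    p_pq = p_p * q_given_p                   # P(p and q)
    p_pnq = p_p * (1 - q_given_p)            # P(p and not-q)
    p_npq = p_not_p * q_given_not_p          # P(not-p and q)
    p_npnq = p_not_p * (1 - q_given_not_p)   # P(not-p and not-q)
    return {
        "MP": q_given_p,                     # P(q | p)
        "MT": p_npnq / (p_pnq + p_npnq),     # P(not-p | not-q)
        "AC": p_pq / (p_pq + p_npq),         # P(p | q)
        "DA": p_npnq / (p_npq + p_npnq),     # P(not-q | not-p)
    }

# Illustrative parameters: a mostly reliable conditional, rare antecedent.
print(inference_endorsements(p_p=0.3, q_given_p=0.9, q_given_not_p=0.2))

On this kind of account, the familiar asymmetries in how often people endorse MP, MT, AC, and DA fall out of the probabilities rather than from a defective reasoning system.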

The article is here.

Sunday, March 6, 2016

The Unbearable Asymmetry of Bullshit

By Brian Earp
BMJ Blogs
Originally posted February 16, 2016

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings — concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

The blog post is here.

Sunday, February 21, 2016

Epistemology, Communication and Divine Command Theory

By John Danaher
Philosophical Disquisitions
Originally posted July 21, 2015

I have written about the epistemological objection to divine command theory (DCT) on a previous occasion. It goes a little something like this: According to proponents of the DCT, at least some moral statuses (like the fact that X is forbidden, or that X is bad) depend for their existence on God’s commands. In other words, without God’s commands those moral statuses would not exist. It would seem to follow that in order for anyone to know whether X is forbidden/bad (or whatever), they would need to have epistemic access to God’s commands. That is to say, they would need to know that God has commanded X to be forbidden/bad. The problem is that there is a certain class of non-believers — so-called ‘reasonable non-believers’ — who don’t violate any epistemic duties in their non-belief. Consequently, they lack epistemic access to God’s commands without being blameworthy for lacking this access. For them, X cannot be forbidden or bad.

This has been termed the ‘epistemological objection’ to DCT, and I will stick with that name throughout, but it may be a bit of a misnomer. This objection is not just about moral epistemology; it is also about moral ontology. It highlights the fact that at least some DCTs include a (seemingly) epistemic condition in their account of moral ontology. Consequently, if that condition is violated it implies that certain moral facts cease to exist (for at least some people). This is a subtle but important point: the epistemological objection does have ontological implications.

The blog post is here.

Tuesday, December 10, 2013

Could a brain scan diagnose you as a psychopath?

A US neuroscientist claims he has found evidence of psychopathy in his own brain activity

By Chris Chambers
The Guardian
Originally published November 25, 2013

Here is an excerpt:

This isn’t the first time we’ve heard from Fallon. In addition to the fact that his claims haven’t been published in peer-reviewed journals, here are three reasons why we should take what he says with a handful of salt.

One of the most obvious mistakes in Fallon’s reasoning is called the fallacy of reverse inference. His argument goes like this: areas of the brain called the ventromedial prefrontal cortex and orbitofrontal cortex are important for empathy and moral reasoning. At the same time, empathy and moral reasoning are lost or impaired in many psychopaths. So, people who show reduced activity in these regions must be psychopaths.

The flaw with this argument – as Fallon himself must know – is that there is no one-to-one mapping between activity in a given brain region and complex abilities such as empathy. There is no empathy region and there is no psychopath switch. If you think of the brain as a toolkit, these parts of the brain aren’t like hammers or screwdrivers that perform only one task. They’re more like Swiss army knives that have evolved to support a range of different abilities. And just as a Swiss army knife isn’t only a bottle opener, the ventromedial prefrontal cortex isn’t only associated with empathy and moral judgements. It is also involved in decision-making, sensitivity to reward, memory, and predicting the future.
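
The fallacy is easy to state with Bayes' rule. With invented, purely illustrative numbers: even if reduced activity in these regions were common among psychopaths, the probability of psychopathy given reduced activity remains small, because the same pattern arises from many other causes and psychopathy is rare.

# All numbers below are hypothetical, chosen only to illustrate the point.
base_rate = 0.01           # assumed prevalence of psychopathy
p_pattern_if_psych = 0.80  # assumed P(reduced vmPFC/OFC activity | psychopath)
p_pattern_if_not = 0.15    # assumed P(reduced activity | not a psychopath)

p_pattern = (p_pattern_if_psych * base_rate
             + p_pattern_if_not * (1 - base_rate))
posterior = p_pattern_if_psych * base_rate / p_pattern
print(f"P(psychopath | reduced activity) = {posterior:.3f}")  # about 0.05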

The entire article is here.