Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Explanation.

Wednesday, July 22, 2020

Inference from explanation.

Kirfel, L., Icard, T., & Gerstenberg, T.
(2020, May 22).
https://doi.org/10.31234/osf.io/x5mqc

Abstract

What do we learn from a causal explanation? Upon being told that "The fire occurred because a lit match was dropped", we learn that both of these events occurred, and that there is a causal relationship between them. However, causal explanations of the kind "E because C" typically disclose much more than what is explicitly stated. Here, we offer a communication-theoretic account of causal explanations and show specifically that explanations can provide information about the extent to which a cited cause is normal or abnormal, and about the causal structure of the situation. In Experiment 1, we demonstrate that people infer the normality of a cause from an explanation when they know the underlying causal structure. In Experiment 2, we show that people infer the causal structure from an explanation if they know the normality of the cited cause. We find these patterns for scenarios that manipulate both the statistical and the prescriptive normality of events. Finally, we consider how the communicative function of explanations, as highlighted in this series of experiments, may help to elucidate the distinctive roles that normality and causal structure play in causal explanation.

Conclusion

In this paper, we investigate the communicative dimensions of explanation, revealing some of the rich and subtle inferences people draw from explanations. We find that people are able to infer additional information from a causal explanation beyond what was explicitly communicated, such as the causal structure of the situation and the normality of the cited causes. Our studies show that people make these inferences in part by appeal to what they themselves would judge reasonable to say across different possible scenarios. The overall pattern of judgments and inferences brings us closer to a full understanding of how causal explanations function in human discourse and behavior, while also raising new questions concerning the prominent role of norms in causal judgment and the function of causal explanation more broadly.
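The "reasonable to say" idea can be made concrete with a small Bayesian toy. The sketch below is an illustration under assumed numbers, not the authors' model: a listener who hears "E because C" updates towards C having been abnormal to the extent that speakers preferentially cite abnormal causes.

```python
# Minimal rational-speech-act-style sketch (illustrative assumptions,
# not the model from the paper): the listener inverts a model of what
# a speaker would find reasonable to say. Both distributions below are
# made-up numbers chosen only to show the direction of the inference.

PRIOR = {"normal": 0.5, "abnormal": 0.5}    # listener's prior over C's normality
P_CITE = {"normal": 0.3, "abnormal": 0.7}   # assumed: speakers prefer citing abnormal causes

def listener_posterior():
    """P(normality | "E because C") is proportional to P(cite C | normality) * P(normality)."""
    unnorm = {s: P_CITE[s] * PRIOR[s] for s in PRIOR}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

print(listener_posterior())  # {'normal': 0.3, 'abnormal': 0.7}
```

With a flat prior, hearing the explanation shifts the listener towards "abnormal" exactly in proportion to the assumed citation preference, which is the qualitative pattern the experiments report.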

Editor's Note: This research has significant implications for psychotherapy.


Sunday, March 3, 2019

When and why people think beliefs are “debunked” by scientific explanations for their origins

Dillon Plunkett, Lara Buchak, and Tania Lombrozo

Abstract

How do scientific explanations for beliefs affect people’s confidence in those beliefs? For example, do people think neuroscientific explanations for religious belief support or challenge belief in God? In five experiments, we find that the effects of scientific explanations for belief depend on whether the explanations imply normal or abnormal functioning (e.g., if a neural mechanism is doing what it evolved to do). Experiments 1 and 2 find that people think brain-based explanations for religious, moral, and scientific beliefs corroborate those beliefs when the explanations invoke a normally functioning mechanism, but not an abnormally functioning mechanism. Experiment 3 demonstrates comparable effects for other kinds of scientific explanations (e.g., genetic explanations). Experiment 4 confirms that these effects derive from (im)proper functioning, not statistical (in)frequency. Experiment 5 suggests that these effects interact with people’s prior beliefs to produce motivated judgments: People are more skeptical of scientific explanations for their own beliefs if the explanations appeal to abnormal functioning, but they are less skeptical of scientific explanations of opposing beliefs if the explanations appeal to abnormal functioning. These findings suggest that people treat “normality” as a proxy for epistemic reliability and reveal that folk epistemic commitments shape attitudes towards scientific explanations.

The research is here.

Wednesday, January 18, 2017

Rational judges, not extraneous factors in decisions

Tom Stafford
Mind Hacks
Originally published December 8, 2016

Here is an excerpt:

The main analysis works like this: we know that favourable rulings take longer than unfavourable ones (~7 mins vs ~5 mins), and we assume that judges are able to guess how long a case will take to rule on before they begin it (from clues like the thickness of the file, the types of request made, the representation the prisoner has, and so on). Finally, we assume judges have a time limit in mind for each of the three sessions of the day, and will avoid starting cases that they estimate will overrun the time limit for the current session.

It turns out that this kind of rational time-management is sufficient to generate the drops in favourable outcomes. How this occurs isn't straightforward, and it interacts with a quirk of the original authors' data presentation: their graph plots outcomes by the order number of cases, even though the number of cases in each session varied from day to day. So, for example, the graph shows that the 12th case after a break is least likely to be judged favourably, but there wasn't always a 12th case in each session; sessions containing more unfavourable (and hence shorter) cases were more likely to contribute to that data point.
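To see the mechanism concretely, here is a minimal simulation sketch. Only the ~7 vs ~5 minute ruling durations come from the excerpt; the session limit, daily case count, and favourable base rate are illustrative assumptions, not figures from the paper or the reanalysis.

```python
import random

# Assumed durations from the excerpt: favourable rulings ~7 mins,
# unfavourable ~5 mins. Session limit, case count, and base rate are
# illustrative guesses for the sketch.
FAV, UNFAV = 7.0, 5.0
SESSION_LIMIT = 60.0
N_DAYS = 10_000

def run_day(n_cases=25, p_fav=0.4):
    """One day of three sessions. The judge ends a session as soon as the
    next case's estimated duration would overrun the limit; remaining
    cases carry over to the following session."""
    cases = [FAV if random.random() < p_fav else UNFAV for _ in range(n_cases)]
    sessions, i = [], 0
    for _ in range(3):
        elapsed, heard = 0.0, []
        while i < len(cases) and elapsed + cases[i] <= SESSION_LIMIT:
            elapsed += cases[i]
            heard.append(cases[i])
            i += 1
        sessions.append(heard)
    return sessions

# Favourable rate by within-session position, pooled over many days.
tally = {}  # position -> (favourable count, total count)
for _ in range(N_DAYS):
    for session in run_day():
        for pos, duration in enumerate(session, start=1):
            fav, total = tally.get(pos, (0, 0))
            tally[pos] = (fav + (duration == FAV), total + 1)

for pos in sorted(tally):
    fav, total = tally[pos]
    print(f"case {pos}: favourable rate {fav / total:.2f} (n = {total})")
```

Pooling the simulated days, the favourable rate falls towards the end of each session even though every ruling here is "rational": long (favourable) cases that would overrun the limit are simply held over to the next session, and the late positions can only be reached at all by sessions packed with short (unfavourable) cases.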

The article is here.

Thursday, August 11, 2016

Why Do People Tend to Infer “Ought” From “Is”? The Role of Biases in Explanation

Christina M. Tworek and Andrei Cimpian
Psychological Science July 8, 2016

Abstract

People tend to judge what is typical as also good and appropriate—as what ought to be. What accounts for the prevalence of these judgments, given that their validity is at best uncertain? We hypothesized that the tendency to reason from “is” to “ought” is due in part to a systematic bias in people’s (nonmoral) explanations, whereby regularities (e.g., giving roses on Valentine’s Day) are explained predominantly via inherent or intrinsic facts (e.g., roses are beautiful). In turn, these inherence-biased explanations lead to value-laden downstream conclusions (e.g., it is good to give roses). Consistent with this proposal, results from five studies (N = 629 children and adults) suggested that, from an early age, the bias toward inherence in explanations fosters inferences that imbue observed reality with value. Given that explanations fundamentally determine how people understand the world, the bias toward inherence in these judgments is likely to exert substantial influence over sociomoral understanding.

The article is here.

Sunday, November 8, 2015

Deconstructing the seductive allure of neuroscience explanations

Weisberg DS, Keil FC, Goodstein J, Rawson E, Gray JR.
Judgment and Decision Making, Vol. 10, No. 5, 
September 2015, pp. 429–441

Abstract

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people's abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) x 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts' judgments of bad explanations, masking otherwise salient problems in these explanations.

The entire article is here.