Welcome to the Nexus of Ethics, Psychology, Morality, Technology, Health Care, and Philosophy

Showing posts with label Reasoning.

Friday, October 6, 2023

Taking the moral high ground: Deontological and absolutist moral dilemma judgments convey self-righteousness

Weiss, A., Burgmer, P., Rom, S. C., & Conway, P. (2024). 
Journal of Experimental Social Psychology, 110, 104505.

Abstract

Individuals who reject sacrificial harm to maximize overall outcomes, consistent with deontological (vs. utilitarian) ethics, appear warmer, more moral, and more trustworthy. Yet, deontological judgments may not only convey emotional reactions, but also strict adherence to moral rules. We therefore hypothesized that people view deontologists as more morally absolutist and hence self-righteous—as perceiving themselves as morally superior. In addition, both deontologists and utilitarians who base their decisions on rules (vs. emotions) should appear more self-righteous. Four studies (N = 1254) tested these hypotheses. Participants perceived targets as more self-righteous when they rejected (vs. accepted) sacrificial harm in classic moral dilemmas where harm maximizes outcomes (i.e., deontological vs. utilitarian judgments), but not parallel cases where harm fails to maximize outcomes (Study 1). Preregistered Study 2 replicated the focal effect, additionally indicating mediation via perceptions of moral absolutism. Study 3 found that targets who reported basing their deontological judgments on rules, compared to emotional reactions or when processing information was absent, appeared particularly self-righteous. Preregistered Study 4 included both deontological and utilitarian targets and manipulated whether their judgments were based on rules versus emotion (specifically sadness). Grounding either moral position in rules conveyed self-righteousness, while communicating emotions was a remedy. Furthermore, participants perceived targets as more self-righteous the more targets deviated from their own moral beliefs. Studies 3 and 4 additionally examined participants' self-disclosure intentions. In sum, deontological dilemma judgments may convey an absolutist, rule-focused view of morality, but any judgment stemming from rules (in contrast to sadness) promotes self-righteousness perceptions.


My quick take:

The authors also found that people were more likely to perceive deontologists as self-righteous if they based their judgments on rules rather than emotions. This suggests that it is not just the deontological judgment itself that leads to perceptions of self-righteousness, but also the way in which the judgment is made.

Overall, the findings of this study suggest that people who make deontological judgments in moral dilemmas are more likely to be perceived as self-righteous. This is because deontological judgments are often seen as reflecting a rigid and absolutist view of morality, which can come across as arrogant or condescending.

It is important to note that the findings of this study do not mean that all deontologists are self-righteous. However, the study does suggest that people should be aware of how their moral judgments may be perceived by others. If you want to avoid being perceived as self-righteous, it may be helpful to explain your reasons for making a deontological judgment, and to acknowledge the emotional impact of the situation.

Tuesday, October 3, 2023

Emergent analogical reasoning in large language models

Webb, T., Holyoak, K.J. & Lu, H. 
Nat Hum Behav (2023).
https://doi.org/10.1038/s41562-023-01659-w

Abstract

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

Discussion

We have presented an extensive evaluation of analogical reasoning in a state-of-the-art large language model. We found that GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types. These included a novel text-based problem set (Digit Matrices) modeled closely on Raven’s Progressive Matrices, where GPT-3 both outperformed human participants and captured a number of specific signatures of human behavior across problem types. Because we developed the Digit Matrix task specifically for this evaluation, we can be sure GPT-3 had never been exposed to problems of this type, and therefore was performing zero-shot reasoning. GPT-3 also displayed an ability to solve analogies based on more meaningful relations, including four-term verbal analogies and analogies between stories about naturalistic problems.
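
To make the task format concrete, here is a minimal Python sketch of how a Digit Matrices-style problem might be posed to a language model zero-shot. The prompt layout and the query_model helper are my own illustrative assumptions, not the authors' actual materials.

```python
# Sketch of a text-based matrix-reasoning prompt in the spirit of the
# paper's Digit Matrices task. The exact format is an assumption for
# illustration, not the authors' stimuli.

def format_matrix_prompt(matrix):
    """Render a 3x3 digit matrix with the final cell blank, one row per line."""
    lines = []
    for r, row in enumerate(matrix):
        cells = [f"[{d}]" for d in row]
        if r == 2:
            cells[-1] = "[?]"          # the cell the model must complete
        lines.append(" ".join(cells))
    return "Complete the missing cell.\n" + "\n".join(lines) + "\nAnswer:"

# A matrix instantiating a simple progression rule (+3 down each column).
matrix = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]                   # the 9 is hidden and must be inferred

prompt = format_matrix_prompt(matrix)
print(prompt)

# Zero-shot evaluation would pass `prompt` to the model with no solved
# examples in context, e.g.:
# answer = query_model(prompt)   # hypothetical helper wrapping an LLM API
# correct = answer.strip().startswith("9")
```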

It is certainly not the case that GPT-3 mimics human analogical reasoning in all respects. Its performance is limited to the processing of information provided in its local context. Unlike humans, GPT-3 does not have long-term memory for specific episodes. It is therefore unable to search for previously-encountered situations that might create useful analogies with a current problem. For example, GPT-3 can use the general story to guide its solution to the radiation problem, but as soon as its context buffer is emptied, it reverts to giving its non-analogical solution to the problem – the system has learned nothing from processing the analogy. GPT-3’s reasoning ability is also limited by its lack of physical understanding of the world, as evidenced by its failure (in comparison with human children) to use an analogy to solve a transfer problem involving construction and use of simple tools. GPT-3’s difficulty with this task is likely due at least in part to its purely text-based input, lacking the multimodal experience necessary to build a more integrated world model.

But despite these major caveats, our evaluation reveals that GPT-3 exhibits a very general capacity to identify and generalize – in zero-shot fashion – relational patterns to be found within both formal problems and meaningful texts. These results are extremely surprising. It is commonly held that although neural networks can achieve a high level of performance within a narrowly-defined task domain, they cannot robustly generalize what they learn to new problems in the way that human learners do. Analogical reasoning is typically viewed as a quintessential example of this human capacity for abstraction and generalization, allowing human reasoners to intelligently approach novel problems zero-shot.

Thursday, August 24, 2023

The Limits of Informed Consent for an Overwhelmed Patient: Clinicians’ Role in Protecting Patients and Preventing Overwhelm

J. Bester, C.M. Cole, & E. Kodish.
AMA J Ethics. 2016;18(9):869-886.
doi: 10.1001/journalofethics.2016.18.9.peer2-1609.

Abstract

In this paper, we examine the limits of informed consent with particular focus on ways in which various factors can overwhelm decision-making capacity. We introduce overwhelm as a phenomenon commonly experienced by patients in clinical settings and distinguish between emotional overwhelm and informational overload. We argue that in these situations, a clinician’s primary duty is prevention of harm and suggest ways in which clinicians can discharge this obligation. To illustrate our argument, we consider the clinical application of genetic sequencing testing, which involves scientific and technical information that can compromise the understanding and decisional capacity of most patients. Finally, we consider and rebut objections that this could lead to paternalism.

(cut)

Overwhelm and Information Overload

The claim we defend is a simple one: there are medical situations in which the information involved in making a decision is of such a nature that the decision-making capacity of a patient is overwhelmed by the sheer complexity or volume of information at hand. In such cases a patient cannot attain the understanding necessary for informed decision making, and informed consent is therefore not possible. We will support our thesis regarding informational overload by focusing specifically on the area of clinical whole genome sequencing—i.e., identification of an individual’s entire genome, enabling the identification and interaction of multiple genetic variants—as distinct from genetic testing, which tests for specific genetic variants.

We will first present ethical considerations regarding informed consent. Next, we will present three sets of factors that can burden the capacity of a patient to provide informed consent for a specific decision—patient, communication, and information factors—and argue that these factors may in some circumstances make it impossible for a patient to provide informed consent. We will then discuss emotional overwhelm and informational overload and consider how being overwhelmed affects informed consent. Our interest in this essay is mainly in informational overload; we will therefore consider whole genome sequencing as an example in which informational factors overwhelm a patient’s decision-making capacity. Finally, we will offer suggestions as to how the duty to protect patients from harm can be discharged when informed consent is not possible because of emotional overwhelm or informational overload.

(cut)

How should clinicians respond to such situations?

Surrogate decision making. One possible solution to the problem of informed consent when decisional capacity is compromised is to seek a surrogate decision maker. However, in situations of informational overload, this may not solve the problem. If the information has inherent qualities that would overwhelm a reasonable patient, it is likely to also overwhelm a surrogate. Unless the surrogate decision maker is a content expert who also understands the values of the patient, a surrogate decision maker will not solve the problem of informed consent. Surrogate decision making may, however, be useful for the emotionally overwhelmed patient who remains unable to provide informed consent despite additional support.

Shared decision making. Another possible solution is to make use of shared decision making (SDM). This approach relies on deliberation between clinician and patient regarding available health care choices, taking the best evidence into account. The clinician actively involves the patient and elicits patient values. The goal of SDM is often stated as helping patients arrive at informed decisions that respect what matters most to them.

It is not clear, however, that SDM will be successful in facilitating informed decisions when an informed consent process has failed. SDM as a tool for informed decision making is at its core dependent on the patient understanding the options presented and being able to describe the preferred option. Understanding and deliberating about what is at stake for each option is a key component of this use of SDM. Therefore, if the medical information is so complex that it overloads the patient’s decision-making capacity, SDM is unlikely to achieve informed decision making. But if a patient is emotionally overwhelmed by the illness experience and all that accompanies it, a process of SDM and support for the patient may eventually facilitate informed decision making.

Saturday, May 6, 2023

How Smart People Can Stop Being Miserable

Arthur C. Brooks
The Atlantic
Originally posted 23 March 2023

Here are some excerpts:

“Happiness in intelligent people is the rarest thing I know,” an unnamed character casually remarks in Ernest Hemingway’s novel The Garden of Eden. You might say that this is a corollary of the much more famous “Ignorance is bliss.”

The latter recalls phenomena such as:
  • the Dunning-Kruger effect—in which people lacking skills and knowledge in a particular area innocently underestimate their own incompetence—and 
  • the illusion of explanatory depth, which can prompt autodidacts on social media to excitedly present complex scientific phenomena, thinking they understand them in far greater depth than they really do.
The Hemingway hypothesis, however, is less straightforward. I can think of a lot of unhappy intellectuals, to be sure. But is intelligence per se their problem? Happiness scholars have studied this question, and the answer is—as in so many parts of life—it depends. The gifts you possess can lift you up or pull you down; it all depends on how you use them. Many people see intelligence as a way to get ahead of others. But to get happier, we need to do the opposite.

You might assume that intelligence—whether it be the conventional IQ kind, emotional intelligence, musical talent, or some other dimension along which a person can excel—raises happiness, all else being equal. After all, people with higher cognitive ability should logically have more exciting life opportunities than others. They should also acquire more resources with which to enhance their well-being.

In general, however, there is no correlation between general intelligence and life satisfaction at the individual level. That principle does mask a few wrinkles. In 2022, researchers at Weill Cornell Medicine and Fordham University looked at the association between well-being and various building blocks of neurocognitive ability: memory, processing speed, reasoning, spatial visualization, and vocabulary. The only components of intelligence that they found to be positively related to happiness were spatial visualization, memory, and processing speed—but those relationships were fleeting and age-related.

More interesting, the researchers also found a strongly negative association between happiness and vocabulary. To explain this, they offered a hypothesis: People with a large vocabulary “self-select more challenging environments, and as a result may encounter more daily stressors and reduced positive affect.” In other words, loquacious logophiles might have byzantine lives and find themselves in manifold precarious situations that lower their jouissance. (They talk themselves into misery.)

(cut)

I think there is a clear reason that something as valuable as intelligence, especially manifested in one’s ability to communicate, doesn’t necessarily lead to a higher quality of life.

One of life’s cruelest mysteries is why we are impelled to pursue rewards that bring success, but not happiness. Mother Nature drives us toward the four goals of money, power, pleasure, and prestige with the promise that these rewards will bring happiness. In truth, the correlation might be positive, but the causation is probably reversed: Happier people naturally get these rewards. But seek them for their own sake, for your own gain, and happiness will likely fall. Accordingly, if you aspire to use your cleverness for personal benefit—for the praise and admiration of others, or an advantage in work and dating—woe be unto you.

The smarter you are, the better equipped you should be to understand that well-being comes from faith, family, friendship, and work that serves others. Your intelligence is more likely to bring you happiness if you put it to use by chasing better ways to love and serve others, rather than elbowing others aside and hoarding worldly rewards.

In some ways, you can think of intelligence as a resource just like money or power. We know how to make the latter two sources of joy: Share them with others, and use them as a force for good in the world. To make smarts a fount of happiness, too, we can follow the same guide. Here are a couple of tangible proposals.

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). 
Behavioral and Brain Sciences, 1-68. 
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and deliberate thought process. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena—from logical reasoning biases, over prosocial behavior, to moral decision-making. The present paper clarifies that despite the popularity, critical assumptions are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch feature. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner can decide to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic—precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years dual process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement. 
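
The following Python sketch is my own gloss on the kind of non-exclusive architecture De Neys describes, not his implementation: System 1 produces competing intuitions with activation strengths, and System 2 is engaged only when no intuition clearly dominates. The threshold and activation values are illustrative assumptions.

```python
# Toy sketch of a non-exclusive switch mechanism: deliberation is triggered
# by the relative activation strength of competing System 1 intuitions.
# Threshold and activation values are illustrative assumptions.

def respond(intuitions, uncertainty_threshold=0.2):
    """intuitions: dict mapping candidate responses to activation strengths."""
    ranked = sorted(intuitions.items(), key=lambda kv: kv[1], reverse=True)
    (best, s1), (runner_up, s2) = ranked[0], ranked[1]
    if s1 - s2 >= uncertainty_threshold:
        # One intuition dominates: answer intuitively, no deliberation needed.
        return best, "System 1"
    # Competing intuitions are close in strength: engage costly deliberation.
    return deliberate(best, runner_up), "System 2"

def deliberate(a, b):
    """Stand-in for deliberation; a real model would weigh evidence here."""
    return b

# A heuristic intuition narrowly beats a logical one, so System 2 kicks in.
print(respond({"heuristic answer": 0.55, "logical answer": 0.45}))
```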

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate—fast and slow—thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual process future.

Monday, October 17, 2022

The Psychological Origins of Conspiracy Theory Beliefs: Big Events with Small Causes Amplify Conspiratorial Thinking

Vonasch, A., Dore, N., & Felicite, J.
(2022, January 20). 
https://doi.org/10.31234/osf.io/3j9xg

Abstract

Three studies supported a new model of conspiracy theory belief: People are most likely to believe conspiracy theories that explain big, socially important events with smaller, intuitively unappealing official explanations. Two experiments (N = 577) used vignettes about fictional conspiracy theories and measured online participants’ beliefs in the official causes of the events and the corresponding conspiracy theories. We experimentally manipulated the size of the event and its official cause. Larger events and small official causes decreased belief in the official cause and this mediated increased belief in the conspiracy theory, even after controlling for individual differences in paranoia and distrust. Study 3 established external validity and generalizability by coding the 78 most popular conspiracy theories on Reddit. Nearly all (96.7%) popular conspiracy theories explain big, socially important events with smaller, intuitively unappealing official explanations. By contrast, events not producing conspiracy theories often have bigger explanations.

General Discussion

Three studies supported the HOSE (heuristic of sufficient explanation) of conspiracy theory belief. Nearly all popular conspiracy theories sampled were about major events with small official causes deemed too small to sufficiently explain the event. Two experiments involving invented conspiracy theories supported the proposed causal mechanism. People were less likely to believe the official explanation was true because it was relatively small and the event was relatively big. People’s beliefs in the conspiracy theory were mediated by their disbelief in the official explanation. Thus, one reason people believe conspiracy theories is that they offer a bigger explanation for a seemingly implausibly large effect of a small cause.

HOSE helps explain why certain conspiracy theories become popular but others do not. Just as evolutionarily fit genes are especially likely to spread to subsequent generations, ideas (memes) with certain qualities are most likely to spread and thus become popular (Dawkins, 1976). HOSE explains that conspiracy theories spread widely because people are strongly motivated to learn an explanation for important events (Douglas et al., 2017; 2019), and are usually unsatisfied with counterintuitively small explanations that seem insufficient to explain things. Conspiracy theories are typically inspired by events that people perceive to be larger than their causes could plausibly produce. Some conspiracy theories may be inevitable because small causes do sometimes counterintuitively cause big events: via the exponential spread of a microscopic virus or the interconnected, chaotic nature of events like the flap of a butterfly’s wings changing weather across the world (Gleick, 2008). Therefore, it may be impossible to prevent all conspiracy theories from developing.
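
A back-of-the-envelope sketch can make HOSE's core claim concrete. The logistic form and the numbers below are my own illustrative assumptions, not the authors' fitted model; the point is only that belief in the official cause falls as the event outsizes the cause, and conspiracy belief rises through the mediation.

```python
# Illustrative sketch of the HOSE idea: belief in an official explanation
# falls as the gap between perceived event size and perceived cause size
# grows, and disbelief in the official cause feeds conspiracy belief.
# Functional form and constants are assumptions, not the authors' model.

import math

def belief_in_official(event_size, cause_size, k=1.0):
    """Big events with small official causes feel insufficient as explanations."""
    insufficiency = event_size - cause_size   # sizes on a common subjective scale
    return 1.0 / (1.0 + math.exp(k * insufficiency))

def belief_in_conspiracy(event_size, cause_size):
    # Mediation at its crudest: conspiracy belief rises as official belief falls.
    return 1.0 - belief_in_official(event_size, cause_size)

# A big event (8) with a small official cause (2) vs. a comparably big cause (7).
print(belief_in_official(8, 2), belief_in_conspiracy(8, 2))   # low, high
print(belief_in_official(8, 7), belief_in_conspiracy(8, 7))   # higher, lower
```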

Sunday, October 16, 2022

A framework for understanding reasoning errors: From fake news to climate change and beyond

Pennycook, G. (2022, August 31).
https://doi.org/10.31234/osf.io/j3w7d

Abstract

Humans have the capacity, but perhaps not always the willingness, for great intelligence. From global warming to the spread of misinformation and beyond, our species is facing several major challenges that are the result of the limits of our own reasoning and decision-making. So, why are we so prone to errors during reasoning? In this chapter, I will outline a framework for understanding reasoning errors that is based on a three-stage dual-process model of analytic engagement (intuition, metacognition, and reason). The model has two key implications: 1) That a mere lack of deliberation and analytic thinking is a primary source of errors and 2) That when deliberation is activated, it generally reduces errors (via questioning intuitions and integrating new information) rather than increasing errors (via rationalization and motivated reasoning). In support of these claims, I review research showing the extensive predictive validity of measures that index individual differences in analytic cognitive style – even beyond explicit errors per se. In particular, analytic thinking is not only predictive of skepticism about a wide range of epistemically suspect beliefs (paranormal, conspiratorial, COVID-19 misperceptions, pseudoscience and alternative medicines) as well as decreased susceptibility to bullshit, fake news, and misinformation, but also important differences in people’s moral judgments and values as well as their religious beliefs (and disbeliefs). Furthermore, in some (but not all cases), there is evidence from experimental paradigms that support a causal role of analytic thinking in determining judgments, beliefs, and behaviors. The findings reviewed here provide some reason for optimism for the future: It may be possible to foster analytic thinking and therefore improve the quality of our decisions.

Evaluating the evidence: Does reason matter?

Thus far, I have prioritized explaining the various alternative frameworks. I will now turn to an in-depth review of some of the key relevant evidence that helps mediate between these accounts. I will organize this review around two key implications that emerge from the framework that I have proposed.

First, the primary difference between the three-stage model (and related dual-process models) and the social-intuitionist models (and related intuitionist models) is that the former argues that people should be able to overcome intuitive errors using deliberation whereas the latter argues that reason is generally infirm and therefore that intuitive errors will simply dominate. Thus, the reviewed research will investigate the apparent role of deliberation in driving people’s choices, beliefs, and behaviors.

Second, the primary difference between the three-stage model (and related dual-process models) and the identity-protective cognition model is that the latter argues that deliberation facilitates biased information processing whereas the former argues that deliberation generally facilitates accuracy. Thus, the reviewed research will also focus on whether deliberation is linked with inaccuracy in politically-charged or identity-relevant contexts.

Wednesday, November 3, 2021

Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability

Lantian, A., Bagneux, V., DelouvĆ©e, S., 
& Gauvrit, N. (2020, February 7). 
Applied Cognitive Psychology
https://doi.org/10.31234/osf.io/8qhx4

Abstract

Critical thinking is of paramount importance in our society. People regularly assume that critical thinking is a way to reduce conspiracy belief, although the relationship between critical thinking and conspiracy belief has never been tested. We conducted two studies (Study 1, N = 86; Study 2, N = 252), in which we found that critical thinking ability—measured by an open-ended test emphasizing several areas of critical thinking ability in the context of argumentation—is negatively associated with belief in conspiracy theories. Additionally, we did not find a significant relationship between self-reported (subjective) critical thinking ability and conspiracy belief. Our results support the idea that conspiracy believers have less developed critical thinking ability and stimulate discussion about the possibility of reducing conspiracy beliefs via the development of critical thinking.

From the General Discussion

The presumed role of critical thinking in belief in conspiracy theories is continuously discussed by researchers, journalists, and lay people on social networks. One example is the capacity to exercise critical thinking ability to distinguish bogus conspiracy theories from genuine conspiracy theories (Bale, 2007), leading us to question when critical thinking ability could be used to support this adaptive function. Sometimes, it is not unreasonable to think that a form of rationality would help to facilitate the detection of dangerous coalitions (van Prooijen & Van Vugt, 2018). In that respect, Stojanov and Halberstadt (2019) recently introduced a distinction between irrational versus rational suspicion. Whereas the former refers to the general tendency to believe in any conspiracy theory, the latter refers to a higher sensitivity to deception or corruption, defined as “healthy skepticism.” These two aspects of suspicion can now be handled simultaneously thanks to a new scale developed by Stojanov and Halberstadt (2019). In our study, we found that critical thinking ability was associated with lower unfounded belief in conspiracy theories, but this does not answer the question as to whether critical thinking ability can be helpful for the detection of true conspiracies. Future studies could use this new measurement to address this specific question.

Tuesday, September 14, 2021

Reconstructing the Einstellung effect

Binz, M., & Schulz, E. (2021, August 10).
https://doi.org/10.31234/osf.io/yhcf4

Abstract

The Einstellung effect was first described by Abraham Luchins in his doctoral thesis published in 1942. The effect occurs when a repeated solution to old problems is applied to a new problem even though a more appropriate response is available. In Luchins' so-called water jar task, participants had to measure a specific amount of water using three jars of different capacities. Luchins found that subjects kept using methods they had applied in previous trials, even if a more efficient solution for the current trial was available: an Einstellung effect. Moreover, Luchins studied the different conditions that could possibly mediate this effect, including telling participants to pay more attention, changing the number of tasks, alternating between different types of tasks, as well as putting participants under time pressure. In the current work, we reconstruct and reanalyze the data of the various experimental conditions published in Luchins' thesis. We furthermore show that a model of resource-rational decision-making can explain all of the observed effects. This model assumes that people transform prior preferences into a posterior policy to maximize rewards under time constraints. Taken together, our reconstructive and modeling results put the Einstellung effect under the lens of modern-day psychology and show how resource-rational models can explain effects that have historically been seen as deficiencies of human problem-solving.
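
For readers who have not seen the task, a short breadth-first search makes the setup concrete: it finds the fewest fill, empty, and pour moves that measure a target amount. The state encoding below is my own reconstruction, not Luchins' materials; the capacities and target follow the classic B - A - 2C pattern while also admitting a two-move shortcut.

```python
# Minimal solver for Luchins-style water jar problems: breadth-first search
# over jar states finds the shortest sequence of fill/empty/pour moves that
# measures the target amount. The encoding is my own reconstruction.

from collections import deque

def moves(state, cap):
    out = []
    for i in range(3):
        filled = list(state); filled[i] = cap[i]
        out.append((tuple(filled), f"fill {'ABC'[i]}"))
        emptied = list(state); emptied[i] = 0
        out.append((tuple(emptied), f"empty {'ABC'[i]}"))
        for j in range(3):
            if i != j:
                poured = list(state)
                amount = min(state[i], cap[j] - state[j])
                poured[i] -= amount; poured[j] += amount
                out.append((tuple(poured), f"pour {'ABC'[i]} into {'ABC'[j]}"))
    return out

def solve(capacities, target):
    start = (0, 0, 0)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if target in state:
            return path
        for nxt, move in moves(state, capacities):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [move]))
    return None

# Capacities (A, B, C) = (23, 49, 3), target 20: the trained routine
# B - A - 2C works (49 - 23 - 3 - 3 = 20), but BFS finds the shortcut A - C.
print(solve((23, 49, 3), 20))    # ['fill A', 'pour A into C']
```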

From the Discussion

It is typically assumed that the best solution for any particular problem is necessarily the shortest, and thus previous research has largely characterized the Einstellung effect as maladaptive behavior. In the present paper, we have challenged this assumption and provided a resource-rational interpretation of the effect. We did so with the help of an information-theoretic model of decision-making. The central premise of this model is to transform prior preferences into posterior policies in a way that trades off expected utility with the time it takes to make a decision. The resulting model incorporates three basic principles: (1) people prefer simple solutions, i.e., they attempt to spend as little physical effort as possible; (2) they avoid costly computations, i.e., those that require high mental effort; and (3) they adapt to their environment, i.e., they learn about the statistics of the problems they interact with. We found that these simple principles are sufficient to capture the rich characteristics found in Luchins’ data. An additional ablation analysis confirmed that all of these principles are necessary to reproduce the entire set of phenomena reported in Luchins’ thesis.
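
To give a flavor of what such a model looks like, here is a simplified gloss of an information-theoretic policy, with the posterior reweighting prior preferences by exponentiated utility. The inverse temperature beta stands in for available computational resources; all numbers are my own illustrative assumptions rather than the authors' fitted model.

```python
# Simplified gloss of a resource-rational policy: pi(a) is proportional to
# prior(a) * exp(beta * U(a)). Low beta (scarce resources, time pressure)
# keeps the practiced routine; high beta lets utility dominate.
# All values are illustrative assumptions.

import math

def posterior_policy(prior, utility, beta):
    weights = {a: prior[a] * math.exp(beta * utility[a]) for a in prior}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

# The practiced B - A - 2C routine is strongly preferred a priori; the A - C
# shortcut has higher utility (less physical effort) on critical trials.
prior = {"B-A-2C": 0.9, "A-C": 0.1}
utility = {"B-A-2C": 0.5, "A-C": 1.0}

print(posterior_policy(prior, utility, beta=0.5))   # prior dominates: Einstellung
print(posterior_policy(prior, utility, beta=10.0))  # utility dominates: shortcut
```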

Thursday, September 9, 2021

Neurodualism: People Assume that the Brain Affects the Mind more than the Mind Affects the Brain

Valtonen, J., Ahn, W., & Cimpian, A.
Cognitive Science

Abstract

People commonly think of the mind and the brain as distinct entities that interact, a view known as dualism. At the same time, the public widely acknowledges that science attributes all mental phenomena to the workings of a material brain, a view at odds with dualism. How do people reconcile these conflicting perspectives? We propose that people distort claims about the brain from the wider culture to fit their dualist belief that minds and brains are distinct, interacting entities: Exposure to cultural discourse about the brain as the physical basis for the mind prompts people to posit that mind–brain interactions are asymmetric, such that the brain is able to affect the mind more than vice versa. We term this hybrid intuitive theory neurodualism. Five studies involving both thought experiments and naturalistic scenarios provided evidence of neurodualism among laypeople and, to some extent, even practicing psychotherapists. For example, lay participants reported that “a change in a person’s brain” is accompanied by “a change in the person’s mind” more often than vice versa. Similarly, when asked to imagine that “future scientists were able to alter exactly 25% of a person’s brain,” participants reported larger corresponding changes in the person’s mind than in the opposite direction. Participants also showed a similarly asymmetric pattern favoring the brain over the mind in naturalistic scenarios. By uncovering people’s intuitive theories of the mind–brain relation, the results provide insights into societal phenomena such as the allure of neuroscience and common misperceptions of mental health treatments.

From the General Discussion

In all experiments and across several different tasks involving both thought experiments and naturalistic scenarios, untrained participants believed that interventions acting on the brain would affect the mind more than interventions acting on the mind would affect the brain, supporting our proposal. This causal asymmetry was strong and replicated reliably with untrained participants. Moreover, the extent to which participants endorsed popular dualism was only weakly correlated with their endorsement of neurodualism, supporting our proposal that a more complex set of beliefs is involved. In the last study, professional psychotherapists also showed evidence of endorsing neurodualism—albeit to a weaker degree—despite their scientific training and their stronger reluctance, relative to lay participants, to believe that psychiatric medications affect the mind.

Our results both corroborate and extend prior findings regarding intuitive reasoning about minds and brains. Our results corroborate prior findings by showing, once again, that both lay people and trained mental health professionals commonly hold dualistic beliefs. If their reasoning had been based on (folk versions of) a physicalist model such as identity theory or supervenience, participants should not have expected mental events to occur in the absence of neural events. However, both lay participants and professional psychotherapists did consistently report that mental changes can occur (at least sometimes) even in situations in which no neural changes occur.

Sunday, April 25, 2021

Training for Wisdom: The Distanced-Self-Reflection Diary Method

Grossmann, I., et al. (2021).
Psychological Science. 2021;32(3):381-394. 
doi:10.1177/0956797620969170

Abstract

Two pre-registered longitudinal experiments (Study 1: Canadians/Study 2: Americans and Canadians; N=555) tested the utility of illeism—a practice of referring to oneself in the third person—during diary-reflection for the trainability of wisdom-related characteristics in everyday life: emotional complexity (Study 1) and wise reasoning (intellectual humility, open-mindedness about how situations could unfold, consideration of and attempts to integrate diverse viewpoints; Studies 1-2). In a month-long experiment, instruction to engage in third- (vs. first-) person diary-reflections on most significant daily experiences resulted in growth in wise reasoning and emotional complexity assessed in laboratory sessions after vs. before the intervention. Additionally, third- (vs. first-) person participants showed alignment between forecasted and month-later experienced feelings toward close others in challenging situations. Study 2 replicated the third-person self-reflections effect on wise reasoning (vs. first-person- and no-pronoun-controls) in a week-long intervention. The present research demonstrates a path to evidence-based training of wisdom-related processes.

General Discussion

Two interventions demonstrated the effectiveness of distanced self-reflection for promoting wiser reasoning about interpersonal challenges, relative to control conditions. The effect of using distanced self-reflection on wise reasoning was in part statistically accounted for by a corresponding broadening of people’s habitually narrow self-focus into a more expansive sense of self (Aron & Aron, 1997). Distanced self-reflection effects were particularly pronounced for intellectual humility and social-cognitive aspects of wise reasoning (i.e., acknowledgement of others’ perspectives, search for conflict resolution). This project provides the first evidence that wisdom-related cognitive processes can be fostered in daily life. The results suggest that distanced self-reflections in daily diaries may cultivate wiser reasoning about challenging social interactions by promoting spontaneous self-distancing (Ayduk & Kross, 2010).

Friday, June 19, 2020

Better Minds, Better Morals: A Procedural Guide to Better Judgment

Schaefer GO, Savulescu J.
J Posthum Stud. 2017;1(1):26‐43.
doi:10.5325/jpoststud.1.1.0026

Abstract

Making more moral decisions - an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we're employing are sound and reliable. We identify four broad categories of judgment to be targeted - cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A pdf is here.

Tuesday, May 5, 2020

How stress influences our morality

Lucius Caviola and Nadira FaulmĆ¼ller
Oxford Martin School

Abstract

Several studies show that stress can influence moral judgment and behavior. In personal moral dilemmas—scenarios where someone has to be harmed by physical contact in order to save several others—participants under stress tend to make more deontological judgments than nonstressed participants, i.e. they agree less with harming someone for the greater good. Other studies demonstrate that stress can increase pro-social behavior for in-group members but decrease it for out-group members. The dual-process theory of moral judgment in combination with an evolutionary perspective on emotional reactions seems to explain these results: stress might inhibit controlled reasoning and trigger people’s automatic emotional intuitions. In other words, when it comes to morality, stress seems to make us prone to follow our gut reactions instead of our elaborate reasoning.

From the Implications Section

The conclusions drawn from these studies seem to raise an important question: if our moral judgments are so dependent on stress, which of our judgments should we rely on—the ones elicited by stress or the ones we come to after careful consideration? Most people would probably not regard a physiological reaction, such as stress, as a relevant normative factor that should have a qualified influence on our moral values. Instead, our reflective moral judgments seem to represent better what we really care about. This should make us suspicious of the normative validity of emotional intuitions in general. Thus, in order to identify our moral values, we should not blindly follow our gut reactions, but try to think more deliberately about what we care about.

For example, as stated we might be more prone to help a poor beggar on the street when we are stressed. Here, even after careful reflection we might come to the conclusion that this emotional reaction elicited by stress is the morally right thing to do after all. However, in other situations this might not be the case. As we have seen we are less prone to donate money to charity when stressed (cf. Vinkers et al., 2013). But is this reaction really in line with what we consider to be the morally right thing to do after careful reflection? After all, if we care about the well-being of the single beggar, why then should the many more people’s lives, potentially benefiting from our donation, count less?

The research is here.

Wednesday, January 15, 2020

How should we balance morality and the law?

Peter Koch
BCM Blogs
Originally posted 20 Dec 19

I was recently discussing a clinical case with medical students and physicians that involved balancing murky ethical issues and relevant laws. One participant leaned back and said: “Well, if we know the laws, then that’s the end of the story!”

The laws were clear about what ought to (legally) be done, but following the laws in this case would likely produce a bad outcome. We ended up divided about how to proceed with the case, but this discussion raised a bigger question: Exactly how much should we weigh the law in moral deliberations?

The basic distinction between the legal and moral is easy enough to identify. Most people agree that what is legal is not necessarily moral and what is immoral should not necessarily be illegal.

Slavery in the U.S. is commonly used as an example. “Of course,” a good modern citizen will say, “slavery was wrong even when it was legal.” The passing of the 13th Amendment did not make slavery morally wrong; it was wrong already, and the legal structures finally caught up to the moral structures.

There are plenty of acts that are immoral but that should not be illegal. For example, perhaps it is immoral to gossip about your friend’s personal life, but most would agree that this sort of gossip should not be outlawed. The basic distinction between the legal and the moral appears to be simple enough.

Things get trickier, though, when we press more deeply into the matter.

The blog post is here.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
Forbes (forbes.com)
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.
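
To make the first point concrete, here is a toy demonstration of why high-dimensional classifiers are easy to fool: a perturbation that is tiny in every coordinate can still swing the score, because its effect adds up across dimensions. The linear "classifier", the labels, and the numbers are my own illustrative assumptions; gradient-sign attacks on real DNNs exploit the same geometry at scale.

```python
# Toy adversarial example: for a high-dimensional linear scorer, a tiny
# per-coordinate perturbation flips the decision because its effect on the
# score accumulates across dimensions. The classifier and labels are
# illustrative assumptions, not a real vision model.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
w = rng.normal(size=n)                       # weights of a toy linear scorer
x = rng.normal(size=n)                       # a candidate "image"
x += ((5.0 - w @ x) / (w @ w)) * w           # shift x so its score w @ x is exactly 5

def label(v):
    return "banana" if w @ v > 0 else "toaster"

# The input-gradient of a linear score is w itself, so stepping each entry
# by -eps * sign(w_i) lowers the score by eps * sum(|w|), roughly eps * 800 here.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(label(x), "->", label(x_adv))          # banana -> toaster
print(float(np.max(np.abs(x_adv - x))))      # 0.01: a tiny change per coordinate
```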

The info is here.

Monday, November 4, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Johnson, S. G. B., & Ahn, J.
(2019, September 10).
PsyArXiv
https://doi.org/10.31234/osf.io/xetwg

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

Here is the beginning of the General Discussion:

Much of our behavior is tinged with shades of morality. How third parties judge those behaviors has numerous social consequences: People judged as behaving immorally can be socially ostracized, less interpersonally attractive, and less able to take advantage of win–win agreements. Indeed, our desire to avoid ignominy and maintain our moral reputations motivates much of our social behavior. But on the other hand, moral judgment is subject to a variety of heuristics and biases that appear to violate normative moral theories and lead to inconsistency (Bartels, Bauman, Cushman, Pizarro, & McGraw, 2015; Sunstein, 2005). Despite the dominating influence of moral judgment in everyday social cognition, little is known about how judgments of individual acts scale up into broader judgments about sequences of actions, such as moral offsetting (a morally bad act motivates a subsequent morally good act) or self-licensing (a morally good act motivates a subsequent morally bad act). That is, we need a theory of karmic accounting—how rights and wrongs add up in moral judgment.

Friday, October 25, 2019

Deciding Versus Reacting: Conceptions of Moral Judgment and the Reason-Affect Debate

Monin, B., Pizarro, D. A., & Beer, J. S. (2007).
Review of General Psychology, 11(2), 99–111.
https://doi.org/10.1037/1089-2680.11.2.99

Abstract

Recent approaches to moral judgment have typically pitted emotion against reason. In an effort to move beyond this debate, we propose that authors presenting diverging models are considering quite different prototypical situations: those focusing on the resolution of complex dilemmas conclude that morality involves sophisticated reasoning, whereas those studying reactions to shocking moral violations find that morality involves quick, affect-laden processes. We articulate these diverging dominant approaches and consider three directions for future research (moral temptation, moral self-image, and lay understandings of morality) that we propose have not received sufficient attention as a result of the focus on these two prototypical situations within moral psychology.

Concluding Thoughts

Recent theorizing on the psychology of moral decision making has pitted deliberative reasoning against quick affect-laden intuitions. In this article, we propose a resolution to this tension by arguing that it results from a choice of different prototypical situations: advocates of the reasoning approach have focused on sophisticated dilemmas, whereas advocates of the intuition/emotion approach have focused on reactions to other people’s moral infractions. Arbitrarily choosing one or the other as the typical moral situation has a significant impact on one’s characterization of moral judgment.

Saturday, August 10, 2019

Emotions and beliefs about morality can change one another

Monica Bucciarelli and P.N. Johnson-Laird
Acta Psychologica
Volume 198, July 2019

Abstract

A dual-process theory postulates that belief and emotions about moral assertions can affect one another. The present study corroborated this prediction. Experiments 1, 2 and 3 showed that the pleasantness of a moral assertion – from loathing it to loving it – correlated with how strongly individuals believed it, i.e., its subjective probability. But, despite repeated testing, this relation did not occur for factual assertions. To create the correlation, it sufficed to change factual assertions, such as, “Advanced countries are democracies,” into moral assertions, “Advanced countries should be democracies”. Two further experiments corroborated the two-way causal relations for moral assertions. Experiment 4 showed that recall of pleasant memories about moral assertions increased their believability, and that the recall of unpleasant memories had the opposite effect. Experiment 5 showed that the creation of reasons to believe moral assertions increased the pleasantness of the emotions they evoked, and that the creation of reasons to disbelieve moral assertions had the opposite effect. Hence, emotions can change beliefs about moral assertions; and reasons can change emotions about moral assertions. We discuss the implications of these results for alternative theories of morality.

The research is here.

Here is a portion of the Discussion:

In sum, emotions and beliefs correlate for moral assertions, and a change in one can cause a change in the other. The main theoretical problem is to explain these results. They should hardly surprise Utilitarians. As we mentioned in the Introduction, one interpretation of their views (Jon Baron, p.c.) is that it is tautological to predict that if you believe a moral assertion then you will like it. And this interpretation implies that our experiments are studies in semantics, which corroborate the existence of tautologies depending on the meanings of words (contra Quine, 1953; cf. Quelhas, Rasga, & Johnson-Laird, 2017). But the degrees to which participants believed the moral assertions varied from certain to impossible. An assertion that they rated as probable as not is hardly a tautology, and it tended to occur with an emotional reaction of indifference. The hypothesis of a tautological interpretation cannot explain this aspect of an overall correlation in ratings on scales.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games and simple robotics tasks and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.
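
The "ask many better-or-worse questions and train on the answers" recipe can be sketched in a few lines. Below is a minimal Bradley-Terry-style preference model fit by logistic regression on score differences; the features, the simulated "human", and the hyperparameters are my own illustrative assumptions, not the systems described in the article.

```python
# Minimal sketch of learning a value/judgment model from pairwise human
# comparisons: a Bradley-Terry-style logistic loss on score differences.
# The simulated judge, features, and hyperparameters are assumptions.

import numpy as np

rng = np.random.default_rng(1)
dim = 8                                   # feature dimension of an outcome
true_w = rng.normal(size=dim)             # hidden "human values", simulation only

# Dataset: pairs (a, b) plus a human label, 1 if a is judged better than b.
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(500)]
labels = [1 if true_w @ a > true_w @ b else 0 for a, b in pairs]

w = np.zeros(dim)                         # the learned value model
lr = 0.1
for _ in range(200):                      # batch gradient descent on logistic loss
    grad = np.zeros(dim)
    for (a, b), y in zip(pairs, labels):
        p = 1.0 / (1.0 + np.exp(-(w @ (a - b))))   # model's P(a better than b)
        grad += (p - y) * (a - b)
    w -= lr * grad / len(pairs)

# The trained model should now rank outcomes much as the simulated judge does.
agree = np.mean([(w @ (a - b) > 0) == bool(y) for (a, b), y in zip(pairs, labels)])
print(f"agreement with human comparisons: {agree:.2f}")
```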

The info is here.

Monday, February 4, 2019

(Ideo)Logical Reasoning: Ideology Impairs Sound Reasoning

Anup Gampa, Sean Wojcik, Matt Motyl, Brian Nosek, & Pete Ditto
PsyArXiv
Originally posted January 15, 2019
 
Abstract

Beliefs shape how people interpret information and may impair how people engage in logical reasoning. In 3 studies, we show how ideological beliefs impair people's ability to: (1) recognize logical validity in arguments that oppose their political beliefs, and, (2) recognize the lack of logical validity in arguments that support their political beliefs. We observed belief bias effects among liberals and conservatives who evaluated the logical soundness of classically structured logical syllogisms supporting liberal or conservative beliefs. Both liberals and conservatives frequently evaluated the logical structure of entire arguments based on the believability of arguments’ conclusions, leading to predictable patterns of logical errors. As a result, liberals were better at identifying flawed arguments supporting conservative beliefs and conservatives were better at identifying flawed arguments supporting liberal beliefs. These findings illuminate one key mechanism for how political beliefs distort people’s abilities to reason about political topics soundly.
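
The design is easy to simulate. In the sketch below, a judge answers from the conclusion's ideological fit on some fraction of trials and from logical structure otherwise; that alone reproduces the crossover in which each side is better at catching the other side's flawed arguments. The bias probability is an illustrative assumption, not an estimate from the paper.

```python
# Toy simulation of belief bias in ideological syllogism evaluation: when a
# verdict is sometimes driven by whether the conclusion fits the judge's
# politics rather than by logical form, flawed own-side arguments slip
# through while flawed other-side arguments are caught. Illustrative only.

import random
random.seed(0)

def judge(valid, conclusion_fits_my_side, bias=0.35):
    """Verdict on an argument's validity; belief-driven on a fraction of trials."""
    if random.random() < bias:
        return conclusion_fits_my_side    # evaluate the conclusion, not the logic
    return valid                          # evaluate the logical structure

def error_rate(judge_side, arg_side, valid, trials=10_000):
    wrong = sum(judge(valid, arg_side == judge_side) != valid for _ in range(trials))
    return wrong / trials

# Invalid (flawed) arguments: own-side conclusions get waved through.
print(error_rate("liberal", "liberal", valid=False))        # ~0.35: flaw missed
print(error_rate("liberal", "conservative", valid=False))   # ~0.00: flaw caught
```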

The research is here.