Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, March 6, 2023

Cognitive control and dishonesty

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2022).
Trends in Cognitive Sciences, 26(9), 796–808.
https://doi.org/10.1016/j.tics.2022.06.005

Abstract

Dishonesty is ubiquitous and imposes substantial financial and social burdens on society. Intuitively, dishonesty results from a failure of willpower to control selfish behavior. However, recent research suggests that the role of cognitive control in dishonesty is more complex. We review evidence that cognitive control is not needed to be honest or dishonest per se, but that it depends on individual differences in what we call one’s ‘moral default’: for those who are prone to dishonesty, cognitive control indeed aids in being honest, but for those who are already generally honest, cognitive control may help them cheat to occasionally profit from small acts of dishonesty. Thus, the role of cognitive control in (dis)honesty is to override the moral default.

Significance

The precise role of cognitive control in dishonesty has been debated for many years, but recent work has made important strides toward resolving the question.

Recently developed paradigms that allow for investigating dishonesty on the level of the choice rather than on the level of the individual have substantially improved our understanding of the adaptive role of cognitive control in (dis)honesty.

These new paradigms revealed that the role of cognitive control differs across people: for cheaters, it helps them to be honest sometimes, whereas for those who are generally honest, it allows them to cheat on occasion. Thus, cognitive control is not required for (dis)honesty per se but is required to override one's moral default, whether that default is to be honest or to cheat.

Individual differences in moral default are driven by balancing motivation for reward and upholding a moral self-image.

From Concluding remarks

The 'Will' and 'Grace' hypotheses have been debated for quite some time, but recently important strides have been made to resolve this debate. Key elements in this proposed resolution are (i) recognizing that there is heterogeneity between individuals: some default more towards honesty, whereas others have a stronger inclination towards dishonesty; (ii) recognizing that there is heterogeneity within individuals: cheaters can be honest sometimes, and honest people do cheat on occasion; and (iii) the development of experimental paradigms that allow dishonesty to be investigated at the level of the choice, rather than only at the level of the individual or the group. These developments have substantially enhanced understanding of the role of cognitive control in (dis)honesty: it is not required for being honest or dishonest per se, but it is required to override one's moral default, whether to be honest or to cheat (Figure 1).

These insights open up novel research agendas and offer suggestions as to how to develop interventions to curtail dishonesty. Our review suggests three processes that may be targeted by such interventions: reward seeking, self-referential thinking, and cognitive control. Shaping contexts in ways that are conducive to honesty by targeting these processes may go a long way to increase honesty in everyday behavior.

Wednesday, March 1, 2023

Cognitive Control Promotes Either Honesty or Dishonesty, Depending on One's Moral Default

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2021).
The Journal of Neuroscience, 41(42), 8815–8825. 
https://doi.org/10.1523/jneurosci.0666-21.2021

Abstract

Cognitive control is crucially involved in making (dis)honest decisions. However, the precise nature of this role has been hotly debated. Is honesty an intuitive response, or is willpower needed to override an intuitive inclination to cheat? A reconciliation of these conflicting views proposes that cognitive control enables dishonest participants to be honest, whereas it allows those who are generally honest to cheat. Thus, cognitive control does not promote (dis)honesty per se; its effect depends on one's moral default. In the present study, we tested this proposal using electroencephalograms in humans (males and females) in combination with an independent localizer (Stroop task) to mitigate the problem of reverse inference. Our analysis revealed that the neural signature evoked by cognitive control demands in the Stroop task can be used to estimate (dis)honest choices in an independent cheating task, providing converging evidence that cognitive control can indeed help honest participants to cheat, whereas it facilitates honesty for cheaters.

Significance Statement

Dishonesty causes enormous economic losses. To target dishonesty with interventions, a rigorous understanding of the underlying cognitive mechanisms is required. A recent study found that cognitive control enables honest participants to cheat, whereas it helps cheaters to be honest. However, it is evident that a single study does not suffice as support for a novel hypothesis. Therefore, we tested the replicability of this finding using a different modality (EEG instead of fMRI) together with an independent localizer task to avoid reverse inference. We find that the same neural signature evoked by cognitive control demands in the localizer task can be used to estimate (dis)honesty in an independent cheating task, establishing converging evidence that the effect of cognitive control indeed depends on a person's moral default.

From the Discussion section

Previous research has deduced the involvement of cognitive control in moral decision-making through relating observed activations to those observed for cognitive control tasks in prior studies (Greene and Paxton, 2009; Abe and Greene, 2014) or with the help of meta-analytic evidence (Speer et al., 2020) from the Neurosynth platform (Yarkoni et al., 2011). This approach, which relies on reverse inference, must be used with caution because any given brain area may be involved in several different cognitive processes, which makes it difficult to conclude that activation observed in a particular brain area represents one specific function (Poldrack, 2006). Here, we extend prior research by providing more rigorous evidence by means of explicitly eliciting cognitive control in a separate localizer task and then demonstrating that this same neural signature can be identified in the Spot-The-Difference task when participants are exposed to the opportunity to cheat. Moreover, using similarity analysis we provide a direct link between the neural signature of cognitive control, as elicited by the Stroop task, and (dis)honesty by showing that time-frequency patterns of cognitive control demands in the Stroop task are indeed similar to those observed when tempted to cheat in the Spot-The-Difference task. These results provide strong evidence that cognitive control processes are recruited when individuals are tempted to cheat.

Monday, February 25, 2019

Information Processing Biases in the Brain: Implications for Decision-Making and Self-Governance

Sali, A.W., Anderson, B.A. & Courtney, S.M.
Neuroethics (2018) 11: 259.
https://doi.org/10.1007/s12152-016-9251-1

Abstract

To make behavioral choices that are in line with our goals and our moral beliefs, we need to gather and consider information about our current situation. Most information present in our environment is not relevant to the choices we need or want to make and thus could interfere with our ability to behave in ways that reflect our underlying values. Certain sources of information could even lead us to make choices we later regret, and thus it would be beneficial to be able to ignore that information. Our ability to exert successful self-governance depends on our ability to attend to sources of information that we deem important to our decision-making processes. We generally assume that, at any moment, we have the ability to choose what we pay attention to. However, recent research indicates that what we pay attention to is influenced by our prior experiences, including reward history and past successes and failures, even when we are not aware of this history. Even momentary distractions can cause us to miss or discount information that should have a greater influence on our decisions given our values. Such biases in attention thus raise questions about the degree to which the choices that we make may be poorly informed and may not truly reflect our ability to otherwise exert self-governance.

Here is part of the Conclusion:

In order to consistently make decisions that reflect our goals and values, we need to gather the information necessary to guide these decisions, and ignore information that is irrelevant. Although the momentary acquisition of irrelevant information will not likely change our goals, biases in attentional selection may still profoundly influence behavioral outcomes, tipping the balance between competing options when faced with a single goal (e.g., save the least competent swimmer) or between simultaneously competing goals (e.g., relieve drug craving and withdrawal symptoms vs. maintain abstinence). An important component of self-governance might, therefore, be the ability to exert control over how we represent our world as we consider different potential courses of action.

Wednesday, June 7, 2017

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.