Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Decision-making.

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). 
Behavioral and Brain Sciences, 1-68. 
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between more intuitive and more deliberate thought processes. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena—from logical reasoning biases and prosocial behavior to moral decision-making. The present paper clarifies that, despite this popularity, critical assumptions are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch features. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner can decide to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic—precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years dual process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement. 

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate—fast and slow—thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual process future.
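To make the proposed switch mechanism concrete, here is a minimal, hypothetical sketch in Python (our illustration, not De Neys's formal model): both candidate responses arise as intuitions within System 1, and System 2 is engaged only when their activation strengths are too close to call. The response labels, strengths, and threshold below are arbitrary assumptions.

```python
def deliberate(option_a: str, option_b: str) -> str:
    # Placeholder for costly System 2 processing; here it simply re-endorses
    # the currently strongest response.
    return option_a

def choose(intuition_strengths: dict, uncertainty_threshold: float = 0.2) -> str:
    """Return a response, engaging 'deliberation' only when intuitions conflict."""
    ranked = sorted(intuition_strengths.items(), key=lambda kv: kv[1], reverse=True)
    (best, s1), (runner_up, s2) = ranked[0], ranked[1]
    if s1 - s2 < uncertainty_threshold:
        # Competing intuitions are close in activation strength: switch to System 2.
        return deliberate(best, runner_up)
    return best

print(choose({"heuristic response": 0.9, "logical response": 0.8}))  # conflict -> deliberation
print(choose({"heuristic response": 0.9, "logical response": 0.3}))  # clear winner -> stay intuitive
```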

Monday, October 17, 2022

The Psychological Origins of Conspiracy Theory Beliefs: Big Events with Small Causes Amplify Conspiratorial Thinking

Vonasch, A., Dore, N., & Felicite, J.
(2022, January 20). 
https://doi.org/10.31234/osf.io/3j9xg

Abstract

Three studies supported a new model of conspiracy theory belief: People are most likely to believe conspiracy theories that explain big, socially important events with smaller, intuitively unappealing official explanations. Two experiments (N = 577) used vignettes about fictional conspiracy theories and measured online participants’ beliefs in the official causes of the events and the corresponding conspiracy theories. We experimentally manipulated the size of the event and its official cause. Larger events and small official causes decreased belief in the official cause and this mediated increased belief in the conspiracy theory, even after controlling for individual differences in paranoia and distrust. Study 3 established external validity and generalizability by coding the 78 most popular conspiracy theories on Reddit. Nearly all (96.7%) popular conspiracy theories explain big, socially important events with smaller, intuitively unappealing official explanations. By contrast, events not producing conspiracy theories often have bigger explanations.

General Discussion

Three studies supported the HOSE (heuristic of sufficient explanation) of conspiracy theory belief. Nearly all popular conspiracy theories sampled were about major events with small official causes deemed too small to sufficiently explain the event. Two experiments involving invented conspiracy theories supported the proposed causal mechanism. People were less likely to believe the official explanation was true because it was relatively small and the event was relatively big. People’s beliefs in the conspiracy theory were mediated by their disbelief in the official explanation. Thus, one reason people believe conspiracy theories is because they offer a bigger explanation for a seemingly implausibly large effect of a small cause.
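The mediation claim (a smaller official cause lowers belief in the official explanation, which in turn raises belief in the conspiracy theory) can be illustrated with a simple regression-based mediation sketch in Python. This is a toy simulation under assumed effect sizes, not the authors' data or analysis; all variable names are hypothetical.

```python
# Regression-based mediation sketch: cause size -> belief in official cause ->
# belief in the conspiracy theory. Effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 577  # combined N of the two experiments, used here only as a sample size

small_official_cause = rng.integers(0, 2, n)  # 0 = big official cause, 1 = small one
belief_official = 5 - 1.5 * small_official_cause + rng.normal(0, 1, n)       # path a
belief_conspiracy = 2 - 0.8 * belief_official + rng.normal(0, 1, n)          # path b

def ols(y, *xs):
    """Return OLS coefficients for y ~ 1 + xs."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(belief_official, small_official_cause)[1]                       # cause size -> official belief
b = ols(belief_conspiracy, belief_official, small_official_cause)[1]    # official belief -> conspiracy belief
c = ols(belief_conspiracy, small_official_cause)[1]                     # total effect
print(f"indirect (mediated) effect a*b = {a*b:.2f}, total effect c = {c:.2f}")
```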

HOSE helps explain why certain conspiracy theories become popular but others do not. Just as evolutionarily fit genes are especially likely to spread to subsequent generations, ideas (memes) with certain qualities are most likely to spread and thus become popular (Dawkins, 1976). HOSE explains that conspiracy theories spread widely because people are strongly motivated to learn an explanation for important events (Douglas et al., 2017, 2019), and are usually unsatisfied with counterintuitively small explanations that seem insufficient to explain things. Conspiracy theories are typically inspired by events that people perceive to be larger than their causes could plausibly produce. Some conspiracy theories may be inevitable because small causes do sometimes counterintuitively cause big events: via the exponential spread of a microscopic virus or the interconnected, chaotic nature of events like the flap of a butterfly's wings changing weather across the world (Gleick, 2008). Therefore, it may be impossible to prevent all conspiracy theories from developing.

Sunday, August 21, 2022

Medial and orbital frontal cortex in decision-making and flexible behavior

Klein-Flügge, M. C., Bongioanni, A., & 
Rushworth, M. F. (2022).
Neuron.
https://doi.org/10.1016/j.neuron.2022.05.022

Summary

The medial frontal cortex and adjacent orbitofrontal cortex have been the focus of investigations of decision-making, behavioral flexibility, and social behavior. We review studies conducted in humans, macaques, and rodents and argue that several regions with different functional roles can be identified in the dorsal anterior cingulate cortex, perigenual anterior cingulate cortex, anterior medial frontal cortex, ventromedial prefrontal cortex, and medial and lateral parts of the orbitofrontal cortex. There is increasing evidence that the manner in which these areas represent the value of the environment and specific choices differs from that of subcortical brain regions and is more complex than previously thought. Although activity in some regions reflects distributions of reward and opportunities across the environment, in other cases, activity reflects the structural relationships between features of the environment that animals can use to infer what decision to take even if they have not encountered identical opportunities in the past.

Summary

Neural systems that represent the value of the environment exist in many vertebrates. An extended subcortical circuit spanning the striatum, midbrain, and brainstem nuclei of mammals corresponds to these ancient systems. In addition, however, mammals possess several frontal cortical regions concerned with guidance of decision-making and adaptive, flexible behavior. Although these frontal systems interact extensively with these subcortical circuits, they make specific contributions to behavior and also influence behavior via other cortical routes. Some areas such as the ACC, which is present in a broad range of mammals, represent the distribution of opportunities in an environment over space and time, whereas other brain regions such as amFC and dmPFC have roles in representing structural associations and causal links between environmental features, including aspects of the social environment (Figure 8). Although the origins of these areas and their functions are traceable to rodents, they are especially prominent in primates. They make it possible not just to select choices on the basis of past experience of identical situations, but to make inferences to guide decisions in new scenarios.

Friday, August 5, 2022

The Neuroscience Behind Bad Decisions

Emily Singer
Quanta Magazine
Originally posted 13 AUG 16

Here are excerpts:

Economists have spent more than 50 years cataloging irrational choices like these. Nobel Prizes have been earned; millions of copies of Freakonomics have been sold. But economists still aren’t sure why they happen. “There had been a real cottage industry in how to explain them and lots of attempts to make them go away,” said Eric Johnson, a psychologist and co-director of the Center for Decision Sciences at Columbia University. But none of the half-dozen or so explanations are clear winners, he said.

In the last 15 to 20 years [this article was written in 2016], neuroscientists have begun to peer directly into the brain in search of answers. “Knowing something about how information is represented in the brain and the computational principles of the brain helps you understand why people make decisions how they do,” said Angela Yu, a theoretical neuroscientist at the University of California, San Diego.

Glimcher is using both the brain and behavior to try to explain our irrationality. He has combined results from studies like the candy bar experiment with neuroscience data — measurements of electrical activity in the brains of animals as they make decisions — to develop a theory of how we make decisions and why that can lead to mistakes.

(cut)

But the decision-making system operates under more complex constraints and has to consider many different types of information. For example, a person might choose which house to buy depending on its location, size or style. But the relative importance of each of these factors, as well as their optimal value — city or suburbs, Victorian or modern — is fundamentally subjective. It varies from person to person and may even change for an individual depending on their stage of life. “There is not one simple, easy-to-measure mathematical quantity like redundancy that decision scientists universally agree on as being a key factor in the comparison of competing alternatives,” Yu said.

She suggests that uncertainty in how we value different options is behind some of our poor decisions. “If you’ve bought a lot of houses, you’ll evaluate houses differently than if you were a first-time homebuyer,” Yu said. “Or if your parents bought a house during the housing crisis, it may later affect how you buy a house.”

Moreover, Yu argues, the visual and decision-making systems have different end-goals. “Vision is a sensory system whose job is to recover as much information as possible from the world,” she said. “Decision-making is about trying to make a decision you’ll enjoy. I think the computational goal is not just information, it’s something more behaviorally relevant like total enjoyment.”

For many of us, the main concern over decision-making is practical — how can we make better decisions? Glimcher said that his research has helped him develop specific strategies. “Rather than pick what I hope is the best, instead I now always start by eliminating the worst element from a choice set,” he said, reducing the number of options to something manageable, like three.


Curator's note: Oddly enough, this last sentence describes what personalized algorithms do. Pushing people toward a limited set of options has both positive and negative aspects: while it may help with decision-making, it also contributes to political polarization.

Monday, July 18, 2022

The One That Got Away: Overestimation of Forgone Alternatives as a Hidden Source of Regret

Feiler, D., & Müller-Trede, J. (2022).
Psychological Science, 33(2), 314–324.
https://doi.org/10.1177/09567976211032657

Abstract

Past research has established that observing the outcomes of forgone alternatives is an important driver of regret. In this research, we predicted and empirically corroborated a seemingly opposite result: Participants in our studies were more likely to experience regret when they did not observe a forgone outcome than when it was revealed. Our prediction drew on two theoretical observations. First, feelings of regret frequently stem from comparing a chosen option with one’s belief about what the forgone alternative would have been. Second, when there are many alternatives to choose from under uncertainty, the perceived attractiveness of the almost-chosen alternative tends to exceed its reality. In four preregistered studies (Ns = 800, 599, 150, and 197 adults), we found that participants predictably overestimated the forgone path, and this overestimation caused undue regret. We discuss the psychological implications of this hidden source of regret and reconcile the ostensible contradiction with past research.
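The core mechanism in the abstract, that the best-looking of many noisily evaluated alternatives tends to be overestimated, can be shown with a short simulation. This is our illustrative sketch, not the authors' materials; the number of options and the noise level are arbitrary assumptions.

```python
# Selection-on-noise sketch: the option with the highest *estimated* value is,
# on average, overestimated relative to its true value. The paper applies this
# logic to the best-looking forgone alternative, which then inflates regret.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_options, noise_sd = 10_000, 8, 1.0

true_values = rng.normal(0, 1, (n_trials, n_options))
estimates = true_values + rng.normal(0, noise_sd, (n_trials, n_options))

best_idx = estimates.argmax(axis=1)          # the alternative that looks best
rows = np.arange(n_trials)
overestimation = estimates[rows, best_idx] - true_values[rows, best_idx]

print(f"mean overestimation of the best-looking alternative: {overestimation.mean():.2f}")
# Positive on average: judging by noisy estimates systematically inflates the
# apparent value of the almost-chosen path.
```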

Statement of Relevance

Reflecting on our past decisions can often make us feel regret. Previous research suggests that feelings of regret stem from comparing the outcome of our chosen path with that of the unchosen path.  We present a seemingly contradictory finding: Participants in our studies were more likely to experience regret when they did not observe the forgone outcome than when they saw it. This effect arises because when there are many paths to choose from, and uncertainty exists about how good each would be, people tend to overestimate the almost-chosen path. An idealized view of the path not taken then becomes an unfair standard of comparison for the chosen path, which inflates feelings of regret. Excessive regret has been found to be associated with depression and anxiety, and our work suggests that there may be a hidden source of undue regret—overestimation of forgone paths—that may contribute to these problems.

The ending...

Finally, is overestimating the paths we do not take causing us too much regret? Although regret can have benefits for experiential learning, it is an inherently negative emotion and has been found to be associated with depression and excessive anxiety (Kocovski et al., 2005; Markman & Miller, 2006; Roese et al., 2009). Because the regret in our studies was driven by biased beliefs, it may be excessive—after all, better-calibrated beliefs about forgone alternatives would cause less regret. Whether calibrating beliefs about forgone alternatives could also help in alleviating regret's harmful psychological consequences is an important question for future research.


Important implications for psychotherapy....

Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in 
Computing Systems (CHI '22), April 29-May 5,
2022, New Orleans, LA, USA. ACM

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust and more responsibility ascription towards human experts, but higher capacity trust, overall trust, and reliance on AI. These different perceived capabilities could be combined in some form of human-AI collaboration. However, the AI's lack of responsibility can be a problem when AI is deployed for ethical decision-making. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for AI-proposed decisions that lead to negative outcomes.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and the respective experts), or the reliance they displayed. A large part of the discussion on the usage of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. AI expert.

One conclusion from the finding that the control conditions of AI may matter less than expected is that human-AI collaboration should focus less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for the actual decision: if time is short, AI advice or decisions should make clear which value guided the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), so that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision support system could be designed to counteract human biases in ethical decision making (e.g., by pointing out that human deciders may focus solely on utility maximization and thereby neglect fundamental rights of individuals), so that those biases can become part of the deliberation process.

Sunday, June 12, 2022

You Were Right About COVID, and Then You Weren’t

Olga Khazan
The Atlantic
Originally posted 3 MAY 22

Here are two excerpts:

Tenelle Porter, a psychologist at UC Davis, studies so-called intellectual humility, or the recognition that we have imperfect information and thus our beliefs might be wrong. Practicing intellectual humility, she says, is harder when you’re very active on the internet, or when you’re operating in a cutthroat culture. That might be why it pains me—a very online person working in the very competitive culture of journalism—to say that I was incredibly wrong about COVID at first. In late February 2020, when Smith was sounding the alarm among his co-workers, I had drinks with a colleague who asked me if I was worried about “this new coronavirus thing.”

“No!” I said. After all, I had covered swine flu, which blew over quickly and wasn’t very deadly.

A few days later, my mom called and asked me the same question. “People in Italy are staying inside their houses,” she pointed out.

“Yeah,” I said. “But SARS and MERS both stayed pretty localized to the regions they originally struck.”

Then, a few weeks later, when we were already working from home and buying dried beans, a friend asked me if she should be worried about her wedding, which was scheduled for October 2020.

“Are you kidding?” I said. “They will have figured out a vaccine or something by then.” Her wedding finally took place this month.

(cut)

Thinking like a scientist, or a scout, means “recognizing that every single one of your opinions is a hypothesis waiting to be tested. And every decision you make is an experiment where you forgot to have a control group,” Grant said. The best way to hold opinions or make predictions is to determine what you think given the state of the evidence—and then decide what it would take for you to change your mind. Not only are you committing to staying open-minded; you’re committing to the possibility that you might be wrong.

Because the coronavirus has proved volatile and unpredictable, we should evaluate it as a scientist would. We can’t hold so tightly to prior beliefs that we allow them to guide our behavior when the facts on the ground change. This might mean that we lose our masks one month and don them again the next, or reschedule an indoor party until after case numbers decrease. It might mean supporting strict lockdowns in the spring of 2020 but not in the spring of 2022. It might even mean closing schools again, if a new variant seems to attack children. We should think of masks and other COVID precautions not as shibboleths but like rain boots and umbrellas, as Ashish Jha, the White House coronavirus-response coordinator, has put it. There’s no sense in being pro- or anti-umbrella. You just take it out when it’s raining.

Monday, May 30, 2022

Free will without consciousness?

L. Mudrik, I. G. Arie, et al.
Trends in Cognitive Sciences
Available online 12 April 2022

Abstract

Findings demonstrating decision-related neural activity preceding volitional actions have dominated the discussion about how science can inform the free will debate. These discussions have largely ignored studies suggesting that decisions might be influenced or biased by various unconscious processes. If these effects are indeed real, do they render subjects’ decisions less free or even unfree? Here, we argue that, while unconscious influences on decision-making do not threaten the existence of free will in general, they provide important information about limitations on freedom in specific circumstances. We demonstrate that aspects of this long-lasting controversy are empirically testable and provide insight into their bearing on degrees of freedom, laying the groundwork for future scientific-philosophical approaches.

Highlights
  • A growing body of literature argues for unconscious effects on decision-making.
  • We review a body of such studies while acknowledging methodological limitations, and categorize the types of unconscious influence reported.
  • These effects intuitively challenge free will, despite being generally overlooked in the free will literature. To what extent can decisions be free if they are affected by unconscious factors?
  • Our analysis suggests that unconscious influences on behavior affect degrees of control or reasons-responsiveness. We argue that they do not threaten the existence of free will in general, but only the degree to which we can be free in specific circumstances.

Concluding remarks

Current findings of unconscious effects on decision-making do not threaten the existence of free will in general. Yet, the results still show ways in which our freedom can be compromised under specific circumstances. More experimental and philosophical work is needed to delineate the limits and scope of these effects on our freedom (see Outstanding questions). We have evolved to be the decision-makers that we are; thus, our decisions are affected by biases, internal states, and external contexts. However, we can at least sometimes resist those, if we want, and this ability to resist influences contrary to our preferences and reasons is considered a central feature of freedom. As long as this ability is preserved, and the reviewed findings do not suggest otherwise, we are still free, at least usually and to a significant degree.

Tuesday, May 17, 2022

Why it’s so damn hard to make AI fair and unbiased

Sigal Samuel
Vox.com
Originally posted 19 APR 2022

Here is an excerpt:

So what do big players in the tech space mean, really, when they say they care about making AI that’s fair and unbiased? Major organizations like Google, Microsoft, even the Department of Defense periodically release value statements signaling their commitment to these goals. But they tend to elide a fundamental reality: Even AI developers with the best intentions may face inherent trade-offs, where maximizing one type of fairness necessarily means sacrificing another.

The public can’t afford to ignore that conundrum. It’s a trap door beneath the technologies that are shaping our everyday lives, from lending algorithms to facial recognition. And there’s currently a policy vacuum when it comes to how companies should handle issues around fairness and bias.

“There are industries that are held accountable,” such as the pharmaceutical industry, said Timnit Gebru, a leading AI ethics researcher who was reportedly pushed out of Google in 2020 and who has since started a new institute for AI research. “Before you go to market, you have to prove to us that you don’t do X, Y, Z. There’s no such thing for these [tech] companies. So they can just put it out there.”

That makes it all the more important to understand — and potentially regulate — the algorithms that affect our lives. So let’s walk through three real-world examples to illustrate why fairness trade-offs arise, and then explore some possible solutions.

How would you decide who should get a loan?

Here’s another thought experiment. Let’s say you’re a bank officer, and part of your job is to give out loans. You use an algorithm to help you figure out whom you should loan money to, based on a predictive model — chiefly taking into account their FICO credit score — about how likely they are to repay. Most people with a FICO score above 600 get a loan; most of those below that score don’t.

One type of fairness, termed procedural fairness, would hold that an algorithm is fair if the procedure it uses to make decisions is fair. That means it would judge all applicants based on the same relevant facts, like their payment history; given the same set of facts, everyone will get the same treatment regardless of individual traits like race. By that measure, your algorithm is doing just fine.

But let’s say members of one racial group are statistically much more likely to have a FICO score above 600 and members of another are much less likely — a disparity that can have its roots in historical and policy inequities like redlining that your algorithm does nothing to take into account.

Another conception of fairness, known as distributive fairness, says that an algorithm is fair if it leads to fair outcomes. By this measure, your algorithm is failing, because its recommendations have a disparate impact on one racial group versus another.
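A toy sketch of that trade-off follows: the 600 threshold comes from the article's example, while the score distributions for the two groups are invented for illustration. The same rule applied to everyone (procedural fairness) still yields very different approval rates across groups, which is exactly the distributive-fairness failure described above.

```python
# Procedural vs. distributive fairness with a single score threshold.
# Group score distributions are made-up illustrations of historical disparity.
import numpy as np

rng = np.random.default_rng(2)
threshold = 600

group_a = rng.normal(650, 60, 10_000)   # group whose scores skew above the cutoff
group_b = rng.normal(590, 60, 10_000)   # group whose scores skew below the cutoff

approve_a = (group_a >= threshold).mean()
approve_b = (group_b >= threshold).mean()

print(f"approval rate, group A: {approve_a:.1%}")
print(f"approval rate, group B: {approve_b:.1%}")
# Procedurally fair: every applicant faces the same rule.
# Distributively unfair: approval rates differ sharply across groups,
# so satisfying one notion of fairness does not satisfy the other.
```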

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al.
PNAS, 2022, 119 (12) e2117432119 

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
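The kind of environment the tutor targets, one where immediate rewards do not predict long-term outcomes, can be illustrated with a toy simulation comparing a myopic choice rule to a far-sighted one. This is our sketch, not the authors' task or tutor; the reward magnitudes are arbitrary assumptions.

```python
# Why myopic planning fails when early rewards are uninformative:
# compare choosing by first-step reward vs. planning over whole paths.
import numpy as np

rng = np.random.default_rng(3)

def run(n_episodes=10_000, n_paths=4):
    """Average payoff of a myopic vs. a far-sighted choice rule."""
    myopic, farsighted = 0.0, 0.0
    for _ in range(n_episodes):
        immediate = rng.normal(0, 1, n_paths)    # small, uninformative early rewards
        final = rng.normal(0, 10, n_paths)       # large, decisive long-term outcomes
        total = immediate + final
        myopic += total[immediate.argmax()]      # choose by first-step reward only
        farsighted += total[total.argmax()]      # plan to the end before choosing
    return myopic / n_episodes, farsighted / n_episodes

m, f = run()
print(f"average reward, myopic rule: {m:.2f}; far-sighted rule: {f:.2f}")
```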

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi A, Nili H, et al.
(2022) PLoS Biol 20(3): e3001565. 
https://doi.org/10.1371/journal.pbio.3001565

Abstract

A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity has proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.

Discussion

A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were only affected by the need to reciprocate influence (i.e., normative) specifically in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social change of mind (we tend to change our mind towards the more accurate participants). 

Saturday, April 9, 2022

Deciding to be authentic: Intuition is favored over deliberation when authenticity matters

K. Oktar & T. Lombrozo
Cognition
Volume 223, June 2022, 105021

Abstract

Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people's folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker's commitment and authenticity—with only inferences about the decision-maker's authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

From the Discussion section

Our theory and results are broadly consistent with prior work on cross-domain variation in processing preferences (e.g., Inbar et al., 2010), as well as work showing that people draw social inferences from intuitive decisions (e.g., Tetlock, 2003). However, we bridge and extend these literatures by relating inferences made on the basis of an individual's decision to cross-domain variation in the prescribed roles of intuition and deliberation. Importantly, our work is unique in showing that neither judgments about how decisions ought to be made, nor inferences from decisions, are fully reducible to considerations of differential processing costs or the reliability of a given process for the case at hand. Our stimuli—unlike those used in prior work (e.g., Inbar et al., 2010; Pachur & Spaar, 2015)—involved deliberation costs that had already been incurred at the time of decision, yet participants nevertheless displayed substantial and systematic cross-domain variation in their inferences, processing judgments, and eventual decisions. Most dramatically, our matched-information scenarios in Experiment 3 ensured that effects were driven by decision basis alone. In addition to excluding the computational costs of deliberation and matching the decision to deliberate, these scenarios also matched the evidence available concerning the quality of each choice. Nonetheless, decisions that were based on intuition vs. deliberation were judged differently along a number of dimensions, including their authenticity.

Saturday, February 19, 2022

Meta-analysis of human prediction error for incentives, perception, cognition, and action

Corlett, P.R., Mollick, J.A. & Kober, H.
Neuropsychopharmacol. (2022). 
https://doi.org/10.1038/s41386-021-01264-3

Abstract

Prediction errors (PEs) are a keystone for computational neuroscience. Their association with midbrain neural firing has been confirmed across species and has inspired the construction of artificial intelligence that can outperform humans. However, there is still much to learn. Here, we leverage the wealth of human PE data acquired in the functional neuroimaging setting in service of a deeper understanding, using an MKDA (multilevel kernel density analysis) meta-analysis. Studies were identified with Google Scholar, and we included studies with healthy adult participants that reported activation coordinates corresponding to PEs published between 1999 and 2018. Across 264 PE studies that have focused on reward, punishment, action, cognition, and perception, and consistent with domain-general theoretical models of prediction error, we found midbrain PE signals during cognitive and reward learning tasks, and an insula PE signal for perceptual, social, cognitive, and reward prediction errors. There was evidence for domain-specific error signals—in the visual hierarchy during visual perception, and the dorsomedial prefrontal cortex during social inference. We assessed bias following prior neuroimaging meta-analyses and used family-wise error correction for multiple comparisons. This organization of computation by region will be invaluable in building and testing mechanistic models of cognitive function and dysfunction in machines, humans, and other animals. Limitations include small sample sizes and ROI masking in some included studies, which we addressed by weighting each study by sample size, and directly comparing whole brain vs. ROI-based results.
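For readers unfamiliar with the construct, here is a minimal sketch of the reward prediction error signal the meta-analysis builds on: the difference between received and expected reward, with the expectation updated by a learning rate (a Rescorla-Wagner update). The task parameters below are arbitrary assumptions, not data from the paper.

```python
# Minimal reward prediction error (PE) learning loop.
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.1          # learning rate
value = 0.0          # expected reward
p_reward = 0.8       # true (unknown to the learner) reward probability

for trial in range(200):
    reward = float(rng.random() < p_reward)
    prediction_error = reward - value      # the PE signal associated with midbrain firing
    value += alpha * prediction_error      # Rescorla-Wagner update of the expectation

print(f"learned value after 200 trials: {value:.2f} (true reward rate {p_reward})")
```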

Discussion

There appeared to be regionally compartmentalized PEs for primary and secondary rewards. Primary rewards elicited PEs in the dorsal striatum and amygdala, while secondary reward PEs were in the ventral striatum. This is consistent with the representational transition that occurs with learning. We also found separable PEs for valence domains: caudal regions of the caudate-putamen are involved in the learning of safety signals and avoidance learning, more anterior striatum is selective for rewards, while more posterior striatum is selective for losses. We found a posterior midbrain aversive PE, consistent with preclinical findings that dopamine neurons—which respond to negative valence—are located more posteriorly in the midbrain and project to medial prefrontal regions. Additionally, we found both appetitive and aversive PEs in the amygdala, consistent with animal studies. The presence of both appetitive and aversive PE signals in the amygdala is consistent with its expanding role regulating learning based on surprise and uncertainty rather than fear per se.

Perhaps conspicuous in its absence, given preclinical work, is the hippocampus, which is often held to be a nexus for reward PE, memory PE, and perceptual PE. This may be because the hippocampus is constantly and commonly engaged throughout task performance. Its PEs may not be resolved by the sluggish BOLD response, which is based on local field potentials and may represent the projections into a region (and therefore the striatal PE signals we observed may be the culmination of the processing in CA1, CA3, and subiculum). Furthermore, we have only recently been able to image subfields of the hippocampus (with higher field strengths and more rapid sequences); as higher resolution PE papers accrue we will revisit the meta-analysis of PEs.

Sunday, January 23, 2022

Free will beliefs are better predicted by dualism than determinism beliefs across different cultures

Wisniewski D, Deutschländer R, Haynes J-D 
(2019) PLoS ONE 14(9): e0221617. 
https://doi.org/10.1371/journal.pone.0221617

Abstract

Most people believe in free will. Whether this belief is warranted or not, free will beliefs (FWB) are foundational for many legal systems, and reducing FWB has effects on behavior from the motor to the social level. This raises the important question as to which specific FWB people hold. There are many different ways to conceptualize free will, and some might see physical determinism as a threat that might reduce FWB, while others might not. Here, we investigate lay FWB in a large, representative, replicated online survey study in the US and Singapore (n = 1800), assessing differences in FWB with unprecedented depth within and between cultures. Specifically, we assess the relation of FWB, as measured using the Free Will Inventory, to determinism, dualism, and related concepts like libertarianism and compatibilism. We find that libertarian, compatibilist, and dualist intuitions were related to FWB, but that these intuitions were often logically inconsistent. Importantly, direct comparisons suggest that dualism was more predictive of FWB than the other intuitions. Thus, believing in free will goes hand-in-hand with a belief in a non-physical mind. Highlighting the importance of dualism for FWB impacts academic debates on free will, which currently largely focus on its relation to determinism. Our findings also shed light on how recent (neuro)scientific findings might impact FWB. Demonstrating physical determinism in the brain need not have a strong impact on FWB, due to a widespread belief in dualism.

Conclusion

We have shown that free will beliefs in the general public are most closely related to a strong belief in dualism. This was true across different cultures, age groups, and levels of education. As noted in the beginning, recent neuroscientific findings have been taken to suggest that our choices might originate from unconscious brain activity, which has led some to predict an erosion of free will beliefs, with potentially serious consequences for our sense of responsibility and even the criminal justice system. However, even if neuroscience were to fully describe and explain the causal chain of processes in the physical brain, this need not lead to an erosion of free will beliefs in the general public. Although some might indeed see this as a threat to free will (US citizens with low dualism beliefs), most likely will not, because of a widespread belief in dualism (see also [21]). Our findings also highlight the need for cross-cultural examinations of free will beliefs and related constructs, as previous findings from (mostly undergraduate) US samples do not fully generalize to other cultures.

Sunday, January 16, 2022

The effect of gender and parenting daughters on judgments of morally controversial companies

Niszczota P, Białek M (2021)
PLoS ONE 16(12): e0260503.

Abstract

Earlier findings suggest that men with daughters make judgments and decisions somewhat in line with those made by women. In this paper, we attempt to extend those findings by testing how gender and parenting daughters affect judgments of the appropriateness of investing in and working for morally controversial companies (“sin stocks”). To do so, in Study 1 (N = 634) we investigate whether women judge the prospect of investing in sin stocks more harshly than men do, and test the hypothesis that men with daughters judge such investments less favorably than other men. In Study 2 (N = 782), we investigate the willingness to work in morally controversial companies at a significant wage premium. Results show that—for men—parenting daughters yields harsher evaluations of sin stocks, but there is no evidence that it lowers the propensity to work in such companies. This contrasts with the effect of gender: women reliably judge both investment and employment in morally controversial companies more harshly than men do. We suggest that an aversion towards morally controversial companies might be a partial determinant of the gender gap in wages.

From the Discussion section

There are several insights from our work. Firstly, we investigate laypeople instead of people of high social status, such as CEOs, members of congress, or judges. This would be consequential if parental investment in sons and daughters depended on the social status of the parent. Studying laypeople makes our findings more relevant to the general population, and to more common decisions (e.g., concerning which mutual funds to invest in). Secondly, our models are aimed at directly testing whether the effect of parenting daughters differs between men and women. This would be expected from the female socialization hypothesis: parenting daughters might make the preferences of men more similar to those exhibited by women, as it would help them adopt alternative perspectives on issues on which the opinions of men and women might differ. Yet it would not cause a shift in the preferences of women, as they have the same gender as their daughters. Our findings show that parenting daughters leads to harsher evaluations of morally controversial investments, but only in men. In fact, women parenting a daughter judge morally controversial investments more favorably than women without daughters, a somewhat unexpected finding.

Our results showed a boundary condition of the daughter effect. In our case, a full conceptual replication of the findings of Cronqvist and Yu would translate into a more negative view of morally controversial companies as investment propositions, and a lower willingness to be employed in such companies (at a significant premium). We observed the daughter effect in the former, but not in the latter decision. This is noteworthy, considering that the gender effect was of similar strength in Study 1 (which concerned investment) and Study 2 (which concerned employment). In short, gender differences are robust to whatever factors limit the daughter effect, but those factors are yet to be discovered. We need to point out that we are not the first to find no clear support for the daughter effect (although a methodological comment has been raised about that particular finding). Moreover, in one study, Dahl and colleagues showed that the birth of a child (even of a daughter, if the first-born child was not female) makes male CEOs less generous to employees.

Monday, January 10, 2022

Sequential decision-making impacts moral judgment: How iterative dilemmas can expand our perspective on sacrificial harm

D. H. Bostyn and A. Roets
Journal of Experimental Social Psychology
Volume 98, January 2022, 104244

Abstract

When are sacrificial harms morally appropriate? Traditionally, research within moral psychology has investigated this issue by asking participants to render moral judgments on batteries of single-shot, sacrificial dilemmas. Each of these dilemmas has its own set of targets and describes a situation independent from those described in the other dilemmas. Every decision that participants are asked to make thus takes place within its own, separate moral universe. As a result, people's moral judgments can only be influenced by what happens within that specific dilemma situation. This research methodology ignores that moral judgments are interdependent and that people might try to balance multiple moral concerns across multiple decisions. In the present series of studies we present participants with iterative versions of sacrificial dilemmas that involve the same set of targets across multiple iterations. Using this novel approach, and across five preregistered studies (total n = 1890), we provide clear evidence that a) responding to dilemmas in a sequential, iterative manner impacts the type of moral judgments that participants favor and b) that participants' moral judgments are not only motivated by the desire to refrain from harming others (usually labelled as deontological judgment), or a desire to minimize harms (utilitarian judgment), but also by a desire to spread out harm across all possible targets.

Highlights

• Research on sacrificial harm usually asks participants to judge single-shot dilemmas.

• We investigate sacrificial moral dilemma judgment in an iterative context.

• Sequential decision making impacts moral preferences.

• Many participants express a non-utilitarian concern for the overall spread of harm.


Moral deliberation in iterative contexts

The iterative lens we have adopted prompts some intriguing questions about the nature of moral deliberation in the context of sacrificial harm. Existing theoretical models of sacrificial harm can be described as ‘competition models’ (for instance, Conway & Gawronski, 2013; Gawronski et al., 2017; Greene et al., 2001, 2004; Hennig & Hütter, 2020). These models argue that opposing psychological processes compete to deliver a specific moral judgment and that the process that wins out will determine the nature of that moral judgment. As such, these models presume that moral deliberation is about deciding whether to refrain from harm or to minimize harm in a mutually exclusive manner. Even if participants are tempted by both options, eventually their judgment settles wholly on one or the other. This is sensible in the context of non-iterative dilemmas in which outcomes hinge on a single decision, but is it equally sensible in iterative contexts?

Consider the results of Study 4. In this study, we asked (a subset of) participants how many shocks they would divert out of a total of six shocks. Interestingly, 32% of these participants decided to divert a single shock out of the six (see Fig. 6), thus shocking the individual once and the group five times. How should such a decision be interpreted? These participants did not fully refrain from harming others, nor did they fully minimize harm, nor did they spread harm in the most balanced of ways. Responses like this seem to straddle different moral concerns. While future research will need to corroborate these findings, we suggest that responses like this, i.e., responses that seem to straddle multiple moral concerns, cannot be explained by competition models but necessitate theoretical models that explicitly take into account that participants might strive to strike an (idiosyncratic) pluralistic balance between multiple moral concerns.

Sunday, January 2, 2022

Towards a Theory of Justice for Artificial Intelligence

Iason Gabriel
Forthcoming in Daedalus vol. 151, 
no. 2, Spring 2022

Abstract 

This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens' rights, and promote substantively fair outcomes—something that requires specific attention be paid to the impact they have on the worst-off members of society.

Here is the conclusion:

Second, the demand for public justification in the context of AI deployment may well extend beyond the basic structure. As Langdon Winner argues, when the impact of a technology is sufficiently great, this fact is, by itself, sufficient to generate a free-standing requirement that citizens be consulted and given an opportunity to influence decisions.  Absent such a right, citizens would cede too much control over the future to private actors – something that sits in tension with the idea that they are free and equal. Against this claim, it might be objected that it extends the domain of political justification too far – in a way that risks crowding out room for private experimentation, exploration, and the development of projects by citizens and organizations. However, the objection rests upon the mistaken view that autonomy is promoted by restricting the scope of justificatory practices to as narrow a subject matter as possible. In reality this is not the case: what matters for individual liberty is that practices that have the potential to interfere with this freedom are appropriately regulated so that infractions do not come about. Understood in this way, the demand for public justification stands in opposition not to personal freedom but to forms of unjust imposition.

The call for justice in the context of AI is well-founded. Looked at through the lens of distributive justice, key principles that govern the fair organization of our social, political and economic institutions, also apply to AI systems that are embedded in these practices. One major consequence of this is that liberal and egalitarian norms of justice apply to AI tools and services across a range of contexts. When they are integrated into society’s basic structure, these technologies should support citizens’ basic liberties, promote fair equality of opportunity, and provide the greatest benefit to those who are worst-off. Moreover, deployments of AI outside of the basic structure must still be compatible with the institutions and values that justice requires. There will always be valid reasons, therefore, to consider the relationship of technology to justice when it comes to the deployment of AI systems.

Thursday, December 30, 2021

When Helping Is Risky: The Behavioral and Neurobiological Trade-off of Social and Risk Preferences

Gross, J., Faber, N. S., et al.  (2021).
Psychological Science, 32(11), 1842–1855.
https://doi.org/10.1177/09567976211015942

Abstract

Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.

From the Discussion

From a practical perspective, both methylphenidate (sold under the trade name Ritalin) and atomoxetine (sold under the trade name Strattera) are prescription drugs used to treat attention-deficit/hyperactivity disorder and are regularly used off-label by people who aim to enhance their cognitive performance (Maier et al., 2018). Thus, our results have implications for the ethics of and policy for the use of psychostimulants. Indeed, the Global Drug Survey taken in 2015 and 2017 revealed that 3.2% and 6.6% of respondents, respectively, reported using psychostimulants such as methylphenidate for cognitive enhancement (Maier et al., 2018). Both in the professional ethical debate as well as in the general public, concerns about the medical safety and the fairness of such cognitive enhancements are discussed (Faber et al., 2016). However, our finding that methylphenidate alters helping behavior through increased risk seeking demonstrates that substances aimed at changing cognitive functioning can also influence social behavior. Such “social” side effects of cognitive enhancement (whether deemed positive or negative) are currently unknown to both users and administrators and thus do not receive much attention in the societal debate about psychostimulant use (Faulmüller et al., 2013).

Wednesday, December 29, 2021

Delphi: Towards Machine Ethics and Norms

Jiang, L., et al. (2021). 
ArXiv, abs/2110.07574.

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state ("thou shalt not kill"), applying such rules to real-world situations is far more complex. For example, while "helping a friend" is generally a good thing to do, "helping a friend spread fake news" is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

Our paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present Commonsense Norm Bank, a moral textbook customized for machines, which compiles 1.7M examples of people's ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.
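As a rough sketch of how an accuracy figure like those above is computed, the snippet below scores a model's judgments against human labels. The model_judgment callable is a hypothetical stand-in for querying Delphi or a zero-shot baseline, and the tiny "norm bank" reuses the helping-a-friend example from the abstract; none of this is the authors' code.

```python
# Scoring a moral-judgment model against human-labeled situations.
from typing import Callable, List, Tuple

def accuracy(model_judgment: Callable[[str], str],
             labeled_situations: List[Tuple[str, str]]) -> float:
    """Fraction of situations where the model's judgment matches the human label."""
    hits = sum(model_judgment(situation) == label
               for situation, label in labeled_situations)
    return hits / len(labeled_situations)

# Tiny illustrative "norm bank" in the style described in the paper.
examples = [
    ("helping a friend", "it's good"),
    ("helping a friend spread fake news", "it's wrong"),
    ("ignoring a phone call from a friend", "it's rude"),
]

naive_model = lambda situation: "it's good"   # a trivial baseline judge
print(f"baseline accuracy: {accuracy(naive_model, examples):.1%}")
```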

From the Conclusion

Delphi’s impressive performance on machine moral reasoning under diverse compositional real-life situations, highlights the importance of developing high-quality human-annotated datasets for people’s moral judgments. Finally, we demonstrate through systematic probing that Delphi still struggles with situations dependent on time or diverse cultures, and situations with social and demographic bias implications. We discuss the capabilities and limitations of Delphi throughout this paper and identify key directions in machine ethics for future work. We hope that our work opens up important avenues for future research in the emerging field of machine ethics, and we encourage collective efforts from our research community to tackle these research challenges.