Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, June 21, 2023

3 Strategies for Making Better, More Informed Decisions

Francesca Gino
Harvard Business Review
Originally published 25 May 23

Here is an excerpt:

Think counterfactually about previous decisions you’ve made.

Counterfactual thinking invites you to consider different courses of action you could have taken to gain a better understanding of the factors that influenced your choice. For example, if you missed a big deadline on a work project, you might reflect on how working harder, asking for help, or renegotiating the deadline could have affected the outcome. This reflection can help you recognize which factors played a significant role in your decision-making process — for example, valuing getting the project done on your own versus getting it done on time — and identify changes you might want to make when it comes to future decisions.

The 1998 movie Sliding Doors offers a great example of how counterfactual thinking can help us understand the forces that shape our decisions. The film explores two alternate storylines for the main character, Helen (played by Gwyneth Paltrow), based on whether she catches an upcoming subway train or misses it. While watching both storylines unfold, we gain insight into different factors that influence Helen’s life choices.

Similarly, engaging in counterfactual thinking can help you think through choices you’ve made by helping you expand your focus to consider multiple frames of reference beyond the present outcome. This type of reflection encourages you to take note of different perspectives and reach a more balanced view of your choices. By thinking counterfactually, you can ensure you are looking at existing data in a more unbiased way.

Challenge your assumptions.

You can also fight self-serving biases by actively seeking out information that challenges your beliefs and assumptions. This can be uncomfortable, as it could threaten your identity and worldview, but it’s a key step in developing a more nuanced and informed perspective.

One way to do this is to purposely expose yourself to different perspectives in order to broaden your understanding of an issue. Take Satya Nadella, the CEO of Microsoft. When he assumed the role in 2014, he recognized that the company’s focus on Windows and Office was limiting its growth potential. Not only did the company need a new strategy, he recognized that the culture needed to evolve as well.

In order to expand the company’s horizons, Nadella sought out talent from different backgrounds and industries, who brought with them a diverse range of perspectives. He also encouraged Microsoft employees to experiment and take risks, even if it meant failing along the way. By purposefully exposing himself and his team to different perspectives and new ideas, Nadella was able to transform Microsoft into a more innovative and customer-focused company, with a renewed focus on cloud computing and artificial intelligence.

Tuesday, June 20, 2023

Ethical Accident Algorithms for Autonomous Vehicles and the Trolley Problem: Three Philosophical Disputes

Sven Nyholm
In Lillehammer, H. (ed.), The Trolley Problem.
Cambridge: Cambridge University Press, 2023

Abstract

The Trolley Problem is one of the most intensively discussed and controversial puzzles in contemporary moral philosophy. Over the last half-century, it has also become something of a cultural phenomenon, having been the subject of scientific experiments, online polls, television programs, computer games, and several popular books. This volume offers newly written chapters on a range of topics including the formulation of the Trolley Problem and its standard variations; the evaluation of different forms of moral theory; the neuroscience and social psychology of moral behavior; and the application of thought experiments to moral dilemmas in real life. The chapters are written by leading experts on moral theory, applied philosophy, neuroscience, and social psychology, and include several authors who have set the terms of the ongoing debates. The volume will be valuable for students and scholars working on any aspect of the Trolley Problem and its intellectual significance.

Here is the conclusion:

Accordingly, it seems to me that just as the first methodological approach mentioned a few paragraphs above is problematic, so is the third methodological approach. In other words, we do best to take the second approach. We should neither rely too heavily (or indeed exclusively) on the comparison between the ethics of self-driving cars and the trolley problem, nor wholly ignore and pay no attention to the comparison between the ethics of self-driving cars and the trolley problem. Rather, we do best to make this one – but not the only – thing we do when we think about the ethics of self-driving cars. With what is still a relatively new issue for philosophical ethics to work with, and indeed also regarding older ethical issues that have been around much longer, using a mixed and pluralistic method that approaches the moral issues we are considering from many different angles is surely the best way to go. In this instance, that includes reflecting on – and reflecting critically on – how the ethics of crashes involving self-driving cars is both similar to and different from the philosophy of the trolley problem.

At this point, somebody might say, “what if I am somebody who really dislikes the self-driving cars/trolley problem comparison, and I would really prefer reflecting on the ethics of self-driving cars without spending any time on thinking about the similarities and differences between the ethics of self-driving cars and the trolley problem?” In other words, should everyone working on the ethics of self-driving cars spend at least some of their time reflecting on the comparison with the trolley problem? Luckily for those who are reluctant to spend any of their time reflecting on the self-driving cars/trolley problem comparison, there are others who are willing and able to devote at least some of their energies to this comparison.

In general, I think we should view the community that works on the ethics of this issue as being one in which there can be a division of labor, whereby different members of this field can partly focus on different things, and thereby together cover all of the different aspects that are relevant and important to investigate regarding the ethics of self-driving cars.  As it happens, there has been a remarkable variety in the methods and approaches people have used to address the ethics of self-driving cars (see Nyholm 2018 a-b).  So, while it is my own view that anybody who wants to form a complete overview of the ethics of self-driving cars should, among other things, devote some of their time to studying the comparison with the trolley problem, it is ultimately no big problem if not everyone wishes to do so. There are others who have been studying, and who will most likely continue to reflect on, this comparison.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 MAY 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 

Thursday, May 25, 2023

Unselfish traits and social decision-making patterns characterize six populations of real-world extraordinary altruists

Rhoads, S. A., Vekaria, K. M., et al. (2023).
Nature Communications
Published online 31 March 23

Abstract

Acts of extraordinary, costly altruism, in which significant risks or costs are assumed to benefit strangers, have long represented a motivational puzzle. But the features that consistently distinguish individuals who engage in such acts have not been identified. We assess six groups of real-world extraordinary altruists who had performed costly or risky and normatively rare (<0.00005% per capita) altruistic acts: heroic rescues, non-directed and directed kidney donations, liver donations, marrow or hematopoietic stem cell donations, and humanitarian aid work. Here, we show that the features that best distinguish altruists from controls are traits and decision-making patterns indicating unusually high valuation of others’ outcomes: high Honesty-Humility, reduced Social Discounting, and reduced Personal Distress. Two independent samples of adults who were asked what traits would characterize altruists failed to predict this pattern. These findings suggest that theories regarding self-focused motivations for altruism (e.g., self-enhancing reciprocity, reputation enhancement) alone are insufficient explanations for acts of real-world self-sacrifice.

From the Discussion Section

That extraordinary altruists are consistently distinguished by a common set of traits linked to unselfishness is particularly noteworthy given the differences in the demographics of the various altruistic groups we sampled and the differences in the forms of altruism they have engaged in—from acts of physical heroism to the decision to donate bone marrow. This finding replicates and extends findings from a previous study demonstrating that extraordinary altruists show heightened subjective valuation of socially distant others. In addition, our results are consistent with a recent meta-analysis of 770 studies finding a strong and consistent relationship between Honesty-Humility and various forms of self-reported and laboratory-measured prosociality. Coupled with findings that low levels of unselfish traits (e.g., low Honesty-Humility, high social discounting) correspond to exploitative and antisocial behaviors such as cheating and aggression, these results also lend support to the notion of a bipolar caring continuum along which individuals vary in the degree to which they subjectively value (care about) the welfare of others. They further suggest altruism—arguably the willingness to be voluntarily “exploited” by others—to be the opposite of phenotypes like psychopathy that are characterized by exploiting others. These traits may best predict behavior in novel contexts lacking strong norms, particularly when decisions are made rapidly and intuitively. Notably, people who are higher in prosociality are more likely to participate in psychological research to begin with—thus the observed differences between altruists and controls may be underestimates (i.e., population-level differences may be larger).
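Editor's note: "Social Discounting" here refers to how steeply the subjective value of a benefit to another person falls off with social distance; it is commonly modeled as a hyperbolic curve. Below is a minimal Python sketch of that model. The discount rates are hypothetical, chosen only to illustrate how a lower rate (the "reduced Social Discounting" reported for altruists) keeps distant strangers' outcomes valuable.

```python
# Hyperbolic social discounting: value of a benefit to a person at social
# distance N is v = v0 / (1 + k * N). Lower k means less discounting.
# The k values below are hypothetical, purely for illustration.
import numpy as np

def social_discount(v0: float, k: float, distance: np.ndarray) -> np.ndarray:
    """Discounted value of a benefit v0 at each social distance."""
    return v0 / (1.0 + k * distance)

distance = np.array([1, 2, 5, 10, 20, 50, 100])   # 1 = closest other person
control = social_discount(100.0, k=0.05, distance=distance)
altruist = social_discount(100.0, k=0.01, distance=distance)  # reduced discounting

for d, c, a in zip(distance, control, altruist):
    print(f"distance {d:3d}: control values ${c:5.1f}, altruist values ${a:5.1f}")
```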

Sunday, April 30, 2023

The secrets of cooperation

Bob Holmes
Knowablemagazine.org
Originally published 29 MAR 23

Here are two excerpts:

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation.

Behavioral scientists’ formal definition of cooperation involves paying a personal cost (for example, contributing to charity) to gain a collective benefit (a social safety net). But freeloaders enjoy the same benefit without paying the cost, so all else being equal, freeloading should be an individual’s best choice — and, therefore, we should all be freeloaders eventually.
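To make the freeloader's advantage concrete, here is a minimal sketch of the standard linear public goods game. The payoff numbers (cost, multiplier, group size) are invented, not taken from the article, but they reproduce the dilemma described above: freeloading beats contributing no matter what others do, yet universal contribution beats universal freeloading.

```python
# Minimal linear public goods game: each of n players either pays a fixed
# cost into a common pool or freeloads; the pool is multiplied and shared
# equally. Numbers are invented for illustration, not from the article.

def payoff(contributes: bool, n_contributors: int, n_players: int = 4,
           cost: float = 10.0, multiplier: float = 1.6) -> float:
    """One player's payoff given the total number of contributors."""
    share = multiplier * cost * n_contributors / n_players
    return share - (cost if contributes else 0.0)

n = 4
for k in range(n):  # k = number of *other* players who contribute
    print(f"{k} others contribute: "
          f"cooperate={payoff(True, k + 1):5.1f}  freeload={payoff(False, k):5.1f}")

# Freeloading wins for every k (since cost > multiplier * cost / n_players),
# yet all-contribute pays 6.0 each while all-freeload pays 0.0 each:
# the social dilemma described above.
```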

Many millennia of evolution acting on both our genes and our cultural practices have equipped people with ways of getting past that obstacle, says Muthukrishna, who coauthored a look at the evolution of cooperation in the 2021 Annual Review of Psychology. This cultural-genetic coevolution stacked the deck in human society so that cooperation became the smart move rather than a sucker’s choice. Over thousands of years, that has allowed us to live in villages, towns and cities; work together to build farms, railroads and other communal projects; and develop educational systems and governments.

Evolution has enabled all this by shaping us to value the unwritten rules of society, to feel outrage when someone else breaks those rules and, crucially, to care what others think about us.

“Over the long haul, human psychology has been modified so that we’re able to feel emotions that make us identify with the goals of social groups,” says Rob Boyd, an evolutionary anthropologist at the Institute for Human Origins at Arizona State University.

(cut)

Reputation is more powerful than financial incentives in encouraging cooperation

Almost a decade ago, Yoeli and his colleagues trawled through the published literature to see what worked and what didn’t at encouraging prosocial behavior. Financial incentives such as contribution-matching or cash, or rewards for participating, such as offering T-shirts for blood donors, sometimes worked and sometimes didn’t, they found. In contrast, reputational rewards — making individuals’ cooperative behavior public — consistently boosted participation. The result has held up in the years since. “If anything, the results are stronger,” says Yoeli.

Financial rewards will work if you pay people enough, Yoeli notes — but the cost of such incentives could be prohibitive. One study of 782 German residents, for example, surveyed whether paying people to receive a Covid vaccine would increase vaccine uptake. It did, but researchers found that boosting vaccination rates significantly would have required a payment of at least 3,250 euros — a dauntingly steep price.

And payoffs can actually diminish the reputational rewards people could otherwise gain for cooperative behavior, because others may be unsure whether the person was acting out of altruism or just doing it for the money. “Financial rewards kind of muddy the water about people’s motivations,” says Yoeli. “That undermines any reputational benefit from doing the deed.”

Tuesday, April 25, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., Sifferd, K. 
Ethic Theory Moral Prac (2023).
https://doi.org/10.1007/s10677-023-10385-1

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Conclusions

In this paper we raised two challenges to McGeer’s scaffolded reasons-responsiveness account: agents who are less attuned to social feedback such as autistics, and corrupting moral audiences. We found that, once we parsed the two roles that feedback from a moral audience plays, autistics provide reasons to revise the scaffolded reasons-responsiveness account. We argued that autistic persons, like neurotypicals, wish to justify their behaviour to a moral audience and rely on their moral audience for feedback. However, autistic persons may need more explicit feedback when it comes to the effects their behaviour has on others. They also compensate for difficulties they have in receiving information from the moral audience by justifying action through appeal to moral rules. This shows that McGeer’s view of moral agency needs to include observance of moral rules as a way of reducing reliance on audience feedback. We suspect that McGeer would approve of this proposal, as she mentions that an instance of blame can lead to vocal protest by the target, and a possible renegotiation of norms and rules for what constitutes acceptable behaviour (2019). Consideration of corrupting audiences highlights a different problem from that of resisting blame and renegotiating norms. It draws attention to cases where individual agents must try to go beyond what is accepted in their moral environment, a significant challenge for social beings who rely strongly on moral audiences in developing and calibrating their moral reasons-responsiveness. Resistance to a moral audience requires the capacity to evaluate the action differently; often this will be with reference to a moral rule or principle.

For both neurotypical and autistic individuals, consistent application of moral rules or principles can reinforce and bring back to mind important moral commitments when we are led astray by our own desires or specific (im)moral audiences. But moral audiences still play a crucial role in developing and maintaining reasons-responsiveness. First, they are essential to the development and maintenance of all agents’ moral sensitivity. Second, they can provide an important moral corrective where people may have moral blindspots, especially when they provide insights into ways in which a person has fallen short morally by not taking on board reasons that are not obvious to them. Often, these can be reasons which pertain to the respectful treatment of others who are in some important way different from that person.


In sum: Be responsible and accountable in your actions, as your moral audience is always watching. Doing the right thing matters not just for your reputation, but for the greater good. #ResponsibleAgency #MoralAudience

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 MAR 23

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon Fitbits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before they fully understand the risks of doing so.


In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Saturday, April 22, 2023

A Psychologist Explains How AI and Algorithms Are Changing Our Lives

Danny Lewis
The Wall Street Journal
Originally posted 21 MAR 23

In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty. 

In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.

The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?

It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.

So after all these decades of computer science, are algorithms really just still calculators at the end of the day, running more and more complex equations?

What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.
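Gigerenzer's remark above about identifying "the one or two important cues and ignoring the rest" describes fast-and-frugal heuristics such as take-the-best: check cues in order of validity and decide on the first one that discriminates. A minimal sketch, with invented cue names and values:

```python
# Take-the-best: try cues in descending order of validity; decide on the
# first cue that discriminates between the two options; ignore the rest.
# Cue names and values are invented for illustration.

CUES = ["capital_city", "has_major_airport", "has_university"]  # by validity

def take_the_best(option_a: dict, option_b: dict) -> str:
    for cue in CUES:
        a, b = option_a[cue], option_b[cue]
        if a != b:                      # first discriminating cue decides
            return "A" if a > b else "B"
    return "guess"                      # no cue discriminates

city_a = {"capital_city": 0, "has_major_airport": 1, "has_university": 1}
city_b = {"capital_city": 0, "has_major_airport": 0, "has_university": 1}
print(take_the_best(city_a, city_b))   # -> "A", decided by a single cue
```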

Sunday, March 19, 2023

The role of attention in decision-making under risk in gambling disorder: an eye-tracking study

Hoven, M., Hirmas, A., Engelmann, J. B., 
& van Holst, R. (2022, June 30).
https://doi.org/10.31234/osf.io/fxd3m

Abstract

Gambling disorder (GD) is a behavioural addiction characterized by impairments in decision-making, favouring risk- and reward-prone choices. One explanatory factor for this behaviour is a deviation in attentional processes, as increasing evidence indicates that GD patients show an attentional bias toward gambling stimuli. However, previous attentional studies have not directly investigated attention during risky decision-making. 25 patients with GD and 27 healthy matched controls (HC) completed a mixed gambles task combined with eye-tracking to investigate attentional biases for potential gains versus losses during decision-making under risk. Results indicate that compared to HC, GD patients gambled more and were less loss averse. GD patients did not show a direct attentional bias towards gains (or relative to losses). Using a recent (neuro)economics model that considers average attention and trial-wise deviations in average attention, we conducted fine-grained exploratory analyses of the attentional data. Results indicate that the average attention in GD patients moderated the effect of gain value on gambling choices, whereas this was not the case for HC. GD patients with high average attention for gains started gambling at less high gain values. A similar trend-level effect was found for losses, where GD patients with high average attention for losses stopped gambling with lower loss values. This study gives more insight into how attentional processes in GD play a role in gambling behaviour, which could have implications for the development of future treatments focusing on attentional training or for the development of interventions that increase the salience of losses.

From the Discussion section

We extend the current literature by investigating the role of attention in risky decision-making using eye-tracking, which has been underexplored in GD thus far. Consistent with previous studies in HCs, subjects’ overall relative attention toward gains decreased in favor of attention toward losses when loss values increased. We did not find group differences in attention to either gains or losses, suggesting no direct attentional biases in GD. However, while HCs increased their attention to gains with higher gain values, patients with GD did not. Moreover, while patients with GD displayed lower loss aversion, they did not show less attention to losses; rather, in both groups, increased trial-by-trial attention to losses resulted in less gambling.

The question arises whether attention modulates the effect of gains and losses on choice behavior differently in GD relative to controls. Our exploratory analyses that differentiated between two different channels of attention indeed indicated that the effect of gain value on gambling choices was modulated by the amount of average attention on gains in GD only. In other words, patients with GD who focused more on gains exhibited a greater gambling propensity at relatively low gain values. Notably, the strength of the effect of gain value on choice only significantly differed at average and high levels of attention to gains between groups, while patients with GD and HCs with relatively low levels of average attention to gains did not differ. Moreover, patients with GD who had relatively more average attention to losses showed a reduction in gambling propensity at relatively lower loss values, but note that this was at trend level. Since average attention relates to goal-directed or top-down attention, this measure likely reflects one’s preferences and beliefs. Hence, the current results suggest that gambling choices in patients with GD, relative to HCs, are more influenced by their preferences for gains. Future studies are needed to verify if and how top-down attentional processes affect decision-making in GD.
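For readers wondering how "less loss averse" is measured in a mixed gambles task: a common analysis (not necessarily the authors' exact pipeline) regresses trial-by-trial accept/reject choices on each gamble's potential gain and loss, and estimates loss aversion as lambda = -beta_loss / beta_gain. A sketch with simulated data:

```python
# Estimate loss aversion (lambda) from mixed-gamble accept/reject choices.
# Standard analysis: logistic regression of choice on gain and loss size,
# with lambda ~= -beta_loss / beta_gain. Choices below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
gains = rng.uniform(10, 40, n)     # potential gain on each trial
losses = rng.uniform(5, 20, n)     # potential loss on each trial
TRUE_LAMBDA = 2.0                  # assumed ground truth for the simulation
utility = gains - TRUE_LAMBDA * losses
accept = rng.random(n) < 1 / (1 + np.exp(-0.3 * utility))

X = np.column_stack([gains, losses])
model = LogisticRegression(max_iter=1000).fit(X, accept)
b_gain, b_loss = model.coef_[0]
print(f"estimated lambda = {-b_loss / b_gain:.2f}")
# lambda > 1 means losses loom larger than gains; the study reports
# lower lambda (weaker loss aversion) in patients with GD.
```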


Editor's note: Apparently, patients with GD who focus primarily on gains continue to gamble, while patients with GD and HCs who focus on losses are more likely to stop. Therefore, psychologists treating people with impulse-control difficulties may want to help patients focus on potential losses/harm, as opposed to imagined gains.

Sunday, March 5, 2023

Four Recommendations for Ethical AI in Healthcare

Lindsey Jarrett
Center for Practical Bioethics

For several decades now, we have been having conversations about the impact that technology, from the voyage into space to the devices in our pockets, will have on society. The force with which technology alters our lives at times feels abrupt. It has us feeling excited one day and fearful the next.

If your experiences in life are not dependent on the use of technology — especially if your work still allows for you to disconnect from the virtual world – it may feel like technology is working at a decent pace. However, many of us require some sort of technology to work, to communicate with others, to develop relationships, and to disseminate ideas into the world. Further, we also increasingly need technology to help us make decisions. These decisions vary in complexity from auto-correcting our messages to connecting to someone on a dating app, and without access to a piece of technology, it is increasingly challenging to rely on anything but technology.

Is the use of technology for decision making a problem in and of itself due to its entrenched use across our lives, or are there particular components and contexts that need attention? Your answer may depend on what you want to use it for, how you want others to use it to know you, and why the technology is needed over other tools. These considerations are widely discussed in the areas of criminal justice, finance, security, and hiring practices, and conversations are developing in other sectors as issues of inequity, injustice, and power differentials begin to emerge.

Issues emerging in the healthcare sector are of particular interest to many, especially since the coronavirus pandemic. As these conversations unfold, people start to unpack the various dilemmas that exist within the intersection of technology and healthcare. Scholars have engaged in theoretical rhetoric to examine ethical implications, researchers have worked to evaluate the decision-making processes of data scientists who build clinical algorithms, and healthcare executives have tried to stay ahead of regulation that is looming over their hospital systems.

However, recommendations tend to focus exclusively on those involved with algorithm creation and offer little support to other stakeholders across the healthcare industry. While this guidance turns into practice across data science teams building algorithms, especially those building machine learning based tools, the Ethical AI Initiative sees opportunities to examine decisions that are made regarding these tools before they get to a data scientist’s queue and after they are ready for production. These opportunities are where systemic change can occur, and without that level of change, we will continue to build products to put on the shelf and more products to fill the shelf when those fail.

Healthcare is not unique in facing these types of challenges, and I will outline a few recommendations on how an adapted, augmented system of healthcare technology can operate, as the industry prepares for more forceful regulation of the use of machine learning-based tools in healthcare practice.

Wednesday, March 1, 2023

Cognitive Control Promotes Either Honesty or Dishonesty, Depending on One's Moral Default

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2021).
The Journal of Neuroscience, 41(42), 8815–8825. 
https://doi.org/10.1523/jneurosci.0666-21.2021

Abstract

Cognitive control is crucially involved in making (dis)honest decisions. However, the precise nature of this role has been hotly debated. Is honesty an intuitive response, or is will power needed to override an intuitive inclination to cheat? A reconciliation of these conflicting views proposes that cognitive control enables dishonest participants to be honest, whereas it allows those who are generally honest to cheat. Thus, cognitive control does not promote (dis)honesty per se; it depends on one's moral default. In the present study, we tested this proposal using electroencephalograms in humans (males and females) in combination with an independent localizer (Stroop task) to mitigate the problem of reverse inference. Our analysis revealed that the neural signature evoked by cognitive control demands in the Stroop task can be used to estimate (dis)honest choices in an independent cheating task, providing converging evidence that cognitive control can indeed help honest participants to cheat, whereas it facilitates honesty for cheaters.

Significance Statement

Dishonesty causes enormous economic losses. To target dishonesty with interventions, a rigorous understanding of the underlying cognitive mechanisms is required. A recent study found that cognitive control enables honest participants to cheat, whereas it helps cheaters to be honest. However, it is evident that a single study does not suffice as support for a novel hypothesis. Therefore, we tested the replicability of this finding using a different modality (EEG instead of fMRI) together with an independent localizer task to avoid reverse inference. We find that the same neural signature evoked by cognitive control demands in the localizer task can be used to estimate (dis)honesty in an independent cheating task, establishing converging evidence that the effect of cognitive control indeed depends on a person's moral default.

From the Discussion section

Previous research has deduced the involvement of cognitive control in moral decision-making through relating observed activations to those observed for cognitive control tasks in prior studies (Greene and Paxton, 2009; Abe and Greene, 2014) or with the help of meta-analytic evidence (Speer et al., 2020) from the Neurosynth platform (Yarkoni et al., 2011). This approach, which relies on reverse inference, must be used with caution because any given brain area may be involved in several different cognitive processes, which makes it difficult to conclude that activation observed in a particular brain area represents one specific function (Poldrack, 2006). Here, we extend prior research by providing more rigorous evidence by means of explicitly eliciting cognitive control in a separate localizer task and then demonstrating that this same neural signature can be identified in the Spot-The-Difference task when participants are exposed to the opportunity to cheat. Moreover, using similarity analysis we provide a direct link between the neural signature of cognitive control, as elicited by the Stroop task, and (dis)honesty by showing that time-frequency patterns of cognitive control demands in the Stroop task are indeed similar to those observed when tempted to cheat in the Spot-The-Difference task. These results provide strong evidence that cognitive control processes are recruited when individuals are tempted to cheat.
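The similarity analysis described above amounts to asking whether the time-frequency pattern evoked by cognitive-control demands in the Stroop task reappears when participants are tempted to cheat. Here is a toy sketch of that computation; random arrays stand in for real EEG data, and the authors' actual pipeline differs in detail:

```python
# Toy sketch of a time-frequency similarity analysis: correlate the pattern
# evoked by cognitive-control demands in a localizer (Stroop) with the
# pattern observed when participants can cheat. Random arrays stand in
# for real EEG data (channels x frequencies x time).
import numpy as np

rng = np.random.default_rng(1)
stroop_tf = rng.standard_normal((32, 20, 100))             # localizer pattern
cheat_tf = stroop_tf + rng.standard_normal((32, 20, 100))  # noisy re-expression

def pattern_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two vectorized time-frequency patterns."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

print(f"similarity r = {pattern_similarity(stroop_tf, cheat_tf):.2f}")  # ~0.7
```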

Saturday, February 25, 2023

Five Steps to Get Students Thinking About Ethics

Karen Cotter, Laura Bond, & Lauren Fullmer
The Greater Good Science Center
Originally posted 22 FEB 23

Here is an excerpt and the 5 steps:

Five steps for ethical decision-making

Teaching ethical thinking aligns with the mission you may have as an educator to promote global citizenship. “Being a global citizen means understanding that global ideas and solutions must still fit the complexities of local contexts and cultures, and meet each community’s specific needs and capacities,” explains AFS-USA. While investigating real-world problems from many perspectives, students gain an appreciation for many sides of an issue and avoid the pitfall of simply reinforcing their preexisting attitudes.

Ethical thinking also enriches social-emotional learning. According to researchers Michael D. Burroughs and Nikolaus J. Barkauskas, “By focusing on social, emotional, and ethical literacy in schools educators can contribute to the development of persons with greater self-awareness, emotional understanding and, in turn, the capability to act ethically and successfully interact with others in a democratic society.” The five steps below serve as a seamless way to integrate ethical decision making into a science or STEM class.

These steps come from our Prosocial Design Process for Ethical Decision-Making, which itself is a synthesis of three frameworks: prosocial education (which focuses on promoting emotional, social, moral, and civic capacities that express character in students), the Engineering Design Process (an open-ended problem-solving practice that encourages growth from failure), and the IDEA Ethical Decision-Making Framework. This process offers a way for students to come up with creative solutions to a problem and bring ethical consideration to global issues.

1. Ask questions to identify the issue.
2. Consider the perspectives of people impacted to brainstorm solutions. 
3. Analyze research to design and test solutions. 
4. Evaluate and iterate for an ethically justifiable solution.
5. Communicate findings to all relevant stakeholders. 

(cut)

This ethical framework guides students to think beyond themselves to identify solutions that impact their community. The added SEL (social-emotional learning) benefits of self-reflection, social awareness, relationship skills, and appreciation of the world around them awaken students’ consciousness of core ethical values, equipping them to make decisions for the greater good. Using prosocial science topics like climate change empowers students to engage in relevant, real-world content to create a more equitable, sustainable, and just world where they experience how their humanity can impact the greater good.

Tuesday, February 21, 2023

Motonormativity: How Social Norms Hide a Major Public Health Hazard

Walker, I., Tapp, A., & Davis, A.
(2022, December 14).
https://doi.org/10.31234/osf.io/egnmj

Abstract

Decisions about motor transport, by individuals and policy-makers, show unconscious biases due to cultural assumptions about the role of private cars - a phenomenon we term motonormativity. To explore this claim, a national sample of 2157 UK adults rated, at random, a set of statements about driving (“People shouldn't drive in highly populated areas where other people have to breathe in the car fumes”) or a parallel set of statements with key words changed to shift context ("People shouldn't smoke in highly populated areas where other people have to breathe in the cigarette fumes"). Such context changes could radically alter responses (75% agreed with "People shouldn't smoke... " but only 17% agreed with "People shouldn't drive... "). We discuss how these biases systematically distort medical and policy decisions and give recommendations for how public policy and health professionals might begin to recognise and address these unconscious biases in their work.

Discussion

Our survey showed that people can go from agreeing with a health or risk-related proposition to disagreeing with it simply depending on whether it is couched as a driving or non-driving issue. In the most dramatic case, survey respondents felt that obliging people to breathe toxic fumes went from being unacceptable to acceptable depending on whether the fumes came from cigarettes or motor vehicles. It is, objectively, nonsensical that the ethical and public health issues involved in forcing non-consenting people to inhale air-borne toxins should be judged differently depending on their source, but that is what happened here. It seems that normal judgement criteria can indeed be suspended in the specific context of motoring, as we suggested.

Obviously, we used questions in this study that we felt would stand a good chance of demonstrating a difference between how motoring and non-motoring issues were viewed. But choosing questions likely to reveal differences is not the same thing as stacking the deck. We gave the social bias every chance to reveal itself, but that could only happen because it was out there to be revealed. Prentice and Miller (1992) argue that the ease with which a behavioural phenomenon can be triggered is an index of its true magnitude. The ease with which effects appeared in this study was striking: in the final question the UK public went from 17% agreement to 75% agreement just by changing two words in the question whilst leaving its underlying principle unchanged.
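To get a feel for the size of that swing, a quick two-proportion z-test can be run on the reported 17% versus 75% agreement rates. The even split of the 2,157 respondents across framings is an assumption made for illustration:

```python
# Two-proportion z-test for the driving vs. smoking framing difference.
# Agreement rates (17% vs 75%) are from the abstract; the even split of
# respondents across conditions is an assumption for illustration.
from statistics import NormalDist

n1 = n2 = 2157 // 2
p1, p2 = 0.17, 0.75
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.1f}, p = {p_value:.2g}")   # an enormous effect by survey standards
```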


Another example of a culturally acceptable (or ingrained) bias for harm. Call it "car blindness" or "motonormativity."

Friday, February 17, 2023

Free Will Is Only an Illusion if You Are, Too

Alessandra Buccella and Tomáš Dominik
Scientific American
Originally posted January 16, 2023

Here is an excerpt:

In 2019 neuroscientists Uri Maoz, Liad Mudrik and their colleagues investigated that idea. They presented participants with a choice of two nonprofit organizations to which they could donate $1,000. People could indicate their preferred organization by pressing the left or right button. In some cases, participants knew that their choice mattered because the button would determine which organization would receive the full $1,000. In other cases, people knowingly made meaningless choices because they were told that both organizations would receive $500 regardless of their selection. The results were somewhat surprising. Meaningless choices were preceded by a readiness potential, just as in previous experiments. Meaningful choices were not, however. When we care about a decision and its outcome, our brain appears to behave differently than when a decision is arbitrary.

Even more interesting is the fact that ordinary people’s intuitions about free will and decision-making do not seem consistent with these findings. Some of our colleagues, including Maoz and neuroscientist Jake Gavenas, recently published the results of a large survey, with more than 600 respondents, in which they asked people to rate how “free” various choices made by others seemed. Their ratings suggested that people do not recognize that the brain may handle meaningful choices in a different way from more arbitrary or meaningless ones. People tend, in other words, to imagine all their choices—from which sock to put on first to where to spend a vacation—as equally “free,” even though neuroscience suggests otherwise.

What this tells us is that free will may exist, but it may not operate in the way we intuitively imagine. In the same vein, there is a second intuition that must be addressed to understand studies of volition. When experiments have found that brain activity, such as the readiness potential, precedes the conscious intention to act, some people have jumped to the conclusion that they are “not in charge.” They do not have free will, they reason, because they are somehow subject to their brain activity.

But that assumption misses a broader lesson from neuroscience. “We” are our brain. The combined research makes clear that human beings do have the power to make conscious choices. But that agency and accompanying sense of personal responsibility are not supernatural. They happen in the brain, regardless of whether scientists observe them as clearly as they do a readiness potential.

So there is no “ghost” inside the cerebral machine. But as researchers, we argue that this machinery is so complex, inscrutable and mysterious that popular concepts of “free will” or the “self” remain incredibly useful. They help us think through and imagine—albeit imperfectly—the workings of the mind and brain. As such, they can guide and inspire our investigations in profound ways—provided we continue to question and test these assumptions along the way.


Wednesday, January 11, 2023

How neurons, norms, and institutions shape group cooperation

Van Bavel, J. J., Pärnamets, P., Reinero, D. A., 
& Packer, D. (2022, April 7).
https://doi.org/10.1016/bs.aesp.2022.04.004

Abstract

Cooperation occurs at all stages of human life and is necessary for small groups and large-scale societies alike to emerge and thrive. This chapter bridges research in the fields of cognitive neuroscience, neuroeconomics, and social psychology to help understand group cooperation. We present a value-based framework for understanding cooperation, integrating neuroeconomic models of decision-making with psychological and situational variables involved in cooperative behavior, particularly in groups. According to our framework, the ventromedial prefrontal cortex serves as a neural integration hub for value computation during cooperative decisions, receiving inputs from various neuro-cognitive processes such as attention, affect, memory, and learning. We describe factors that directly or indirectly shape the value of cooperation decisions, including cultural contexts and social norms, personal and social identity, and intergroup relations. We also highlight the role of economic, social, and cultural institutions in shaping cooperative behavior. We discuss the implications for future research on cooperation.

(cut)

Social Institutions

Trust production is crucial for fostering cooperation (Zucker, 1986). We have already discussed two forms of trust production above: the trust and resulting cooperation that develops from experience with and knowledge about individuals, and trust based on social identities. The third form of trust production is institution-based, in which formal mechanisms or processes are used to foster trust (and that do not rely on personal characteristics, a history of exchange, or identity characteristics). At the societal level, trust-supporting institutions include governments, corporate structures, criminal and civil legal systems, contract law and property rights, insurance, and stock markets. When they function effectively, institutions allow for broader cooperation, helping people extend trust beyond other people they know or know of and, crucially, also beyond the boundaries of their in-groups (Fabbri, 2022; Hruschka & Henrich, 2013; Rothstein & Stolle, 2008; Zucker, 1986). Conversely, when these sorts of structures do not function well, “institutional distrust strips away a basic sense that one is protected from exploitation, thus reducing trust between strangers, which is at the core of functioning societies” (van Prooijen, Spadaro, & Wang, 2022).

When strangers with different cultural backgrounds have to interact, their interactions often lack the interpersonal or group-level trust necessary for cooperation. For instance, reliance on tightly-knit social networks, where everyone knows everyone, is often impossible in larger, more diverse environments. Communities can compensate by relying more on group-based trust. For example, banks may loan money primarily within separate kin or ethnic groups (Zucker, 1986). However, the disruption of homogeneous social networks, combined with the increasing need to cooperate across group boundaries, creates incentives to develop and participate in broader sets of institutions. Institutions can facilitate cooperation, and individuals prefer institutions that help regulate interactions and foster trust.

People often may seek to build institutions embodying principles, norms, rules, or procedures that foster group-based cooperation. In turn, these institutions shape decisions by altering the value people place on cooperative decisions. One study, for instance, examined these institutional and psychological dynamics over 30 rounds of a public goods game (Gürerk, Irlenbusch & Rockenbach, 2006). Every round had three stages. First, participants chose whether they wanted to play that round with or without a “sanctioning institution” that would provide a means of rewarding or punishing other players based on their behavior in the game. Second, they played the public goods game with (and only with) other participants who had selected the same institutional structure for that round. After making their decisions (to contribute to the common pool), they then saw how much everyone else in their institutional context had contributed. Third, participants who had opted to play the round with a sanctioning institution could choose, for a price, to punish or reward other players.
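A compact simulation makes the three-stage round structure concrete. All numbers below (endowment, multiplier, punishment costs, and the punishment rule itself) are invented for illustration rather than taken from Gürerk et al.:

```python
# One round of a public goods game with an optional sanctioning institution,
# following the three-stage structure described above. Payoff parameters
# and the punishment rule are invented for illustration.
import random

def play_round(players):
    # Stage 1: each player has chosen an institution ("sanction" or "free").
    groups = {"sanction": [], "free": []}
    for p in players:
        groups[p["institution"]].append(p)
    # Stage 2: a linear public goods game within each institution.
    for members in groups.values():
        if not members:
            continue
        pool = sum(p["contribution"] for p in members)
        share = 1.6 * pool / len(members)
        for p in members:
            p["payoff"] = 20 - p["contribution"] + share   # endowment of 20
    # Stage 3: in the sanctioning institution only, players pay to punish
    # low contributors (a simple stand-in for the reward/punish stage).
    for punisher in groups["sanction"]:
        for target in groups["sanction"]:
            if target is not punisher and target["contribution"] < 10:
                punisher["payoff"] -= 1    # cost of punishing
                target["payoff"] -= 3      # sanction received

players = [{"institution": random.choice(["sanction", "free"]),
            "contribution": random.randint(0, 20)} for _ in range(12)]
play_round(players)
for p in players:
    print(p["institution"], p["contribution"], round(p["payoff"], 1))
```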

Sunday, January 8, 2023

On second thoughts: changes of mind in decision-making

Stone, C., Mattingley, J. B., & Rangelov, D. (2022).
Trends in Cognitive Sciences, 26(5), 419–431.
https://doi.org/10.1016/j.tics.2022.02.004

Abstract

The ability to change initial decisions in the face of new or potentially conflicting information is fundamental to adaptive behavior. From perceptual tasks to multiple-choice tests, research has shown that changes of mind often improve task performance by correcting initial errors. Decision makers must, however, strike a balance between improvements that might arise from changes of mind and potential energetic, temporal, and psychological costs. In this review, we provide an overview of the change-of-mind literature, focusing on key behavioral findings, computational mechanisms, and neural correlates. We propose a conceptual framework that comprises two core decision dimensions – time and evidence source – which link changes of mind across decision contexts, as a first step toward an integrated psychological account of changes of mind.

Highlights
  • Changes of mind are observed during decision-making across a range of decision contexts.
  • While changes of mind are relatively infrequent, they can serve to improve overall behavioral performance by correcting initial errors.
  • Despite often improving performance, changes of mind incur energetic and temporal costs which can bias decision makers into keeping their original responses.
  • Computational models of decision-making have demonstrated that changes of mind can result from continued evidence accumulation in the post-decisional period.
  • Brain regions involved in metacognitive monitoring and affective processing are instrumental for change-of-mind behavior.

Concluding remarks

Changes of mind have received less attention in the scientific literature than the decisions which precede them. Nevertheless, existing research reveals a wealth of compelling findings, supporting changes of mind as a topic worthy of further exploration. In this review, we have covered changes of mind from a behavioral, computational, and neural perspective, and have attempted to draw parallels between disparate lines of research. To this end, we have proposed a framework comprising core decision dimensions relevant to change-of-mind behavior which we hope will foster development of an integrated account. These dimensions conceptualize changes of mind as iterative, predominantly corrective behavioral updates in the face of newly arriving evidence.

The source of this evidence, and how it is integrated into behavior, depends upon both the decision context and stage. However, the mechanisms underlying changes of mind are not equally well understood across the entire decision space. While changes of mind for perceptual decisions involving accumulation of sensory evidence over short durations have been well characterized, much work is needed to extend these insights to the complex decisions we make in everyday life.
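The mechanism flagged in the highlights, continued evidence accumulation in the post-decisional period, can be sketched in a few lines: a drift-diffusion process keeps running after the first bound crossing, and the agent reverses its choice if the accumulator later crosses the opposite bound. All parameter values are arbitrary:

```python
# Drift-diffusion sketch of a change of mind: accumulation continues after
# the first bound crossing, and the choice reverses if the accumulator
# later crosses the opposite bound. All parameter values are arbitrary.
import numpy as np

def trial(drift=0.02, noise=0.1, bound=1.0, post_steps=150, rng=None):
    rng = rng or np.random.default_rng()
    x = 0.0
    while abs(x) < bound:                       # accumulate to initial decision
        x += drift + noise * rng.standard_normal()
    first = 1 if x > 0 else -1
    for _ in range(post_steps):                 # post-decisional accumulation
        x += drift + noise * rng.standard_normal()
        if np.sign(x) != first and abs(x) > bound:
            return first, -first                # change of mind
    return first, first                         # initial decision maintained

rng = np.random.default_rng(42)
results = [trial(rng=rng) for _ in range(2000)]
changed = [(f, s) for f, s in results if f != s]
corrective = sum(final == 1 for _, final in changed)  # drift > 0, so +1 is correct
print(f"changes of mind: {len(changed) / len(results):.1%}; "
      f"corrective: {corrective}/{len(changed)}")
```

Because the drift favors the correct option, most simulated reversals correct initial errors, matching the infrequent-but-corrective pattern described in the highlights.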


One conclusion: ignoring contradictory evidence can account for "confirmation bias."

Friday, December 2, 2022

Rational use of cognitive resources in human planning

Callaway, F., van Opheusden, B., Gul, S. et al. 
Nat Hum Behav 6, 1112–1125 (2022).
https://doi.org/10.1038/s41562-022-01332-8

Abstract

Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near optimal under some circumstances but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.

Discussion

In this paper, we proposed a rational model of resource-constrained planning and compared the predictions of the model to human behaviour in a process-tracing paradigm. Our results suggest that human planning strategies are highly adaptive in ways that previous models cannot capture. In Experiment 1, we found that the optimal planning strategy in a generic environment resembled best-first search with a relative stopping rule. Participant behaviour was also consistent with such a strategy. However, the optimal planning strategy depends on the structure of the environment. Thus, in Experiments 2 and 3, we constructed six environments in which the optimal strategy resembled different classical search algorithms (best-first, breadth-first, depth-first and backward search). In each case, participant behaviour matched the environment-appropriate algorithm, as the optimal model predicted.
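As a concrete picture of best-first search with a relative stopping rule: keep extending the most valuable path on the frontier, and commit once it leads the runner-up by some margin. The toy tree and margin below are invented; the paper derives its stopping rule from the optimal resource-rational model.

```python
# Best-first search with a relative stopping rule: always extend the most
# valuable frontier path, and commit once it leads the runner-up by a
# fixed margin. Tree and margin are invented for illustration.
# children[node] = list of (child, reward) pairs.
children = {
    "root": [("a", 2), ("b", -1)],
    "a": [("a1", -3), ("a2", 1)],
    "b": [("b1", 5), ("b2", 0)],
}

def best_first_plan(margin=4.0):
    frontier = [(0.0, ["root"])]              # (negated path value, path)
    while True:
        frontier.sort()                       # most valuable path first
        best_val, best_path = frontier[0]
        node = best_path[-1]
        lead = (frontier[1][0] - best_val) if len(frontier) > 1 else 0.0
        # Stop planning when the lead is big enough (relative stopping
        # rule) or when the best path cannot be extended further.
        if node not in children or lead >= margin:
            return -best_val, best_path
        frontier.pop(0)
        for child, reward in children[node]:
            frontier.append((best_val - reward, best_path + [child]))

print(best_first_plan())  # commits to root->a->a2 without examining root->b->b1
```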

The idea that people use heuristics that are jointly adapted to environmental structure and computational limitations is not new. First popularized by Herbert Simon, it has more recently been championed in ecological rationality, which generally takes the approach of identifying computationally frugal heuristics that make accurate choices in certain environments. However, while ecological rationality explicitly rejects the notion of optimality, our approach embraces it, identifying heuristics that maximize an objective function that includes both external utility and internal cognitive cost. Supporting our approach, we found that the optimal model explained human planning behaviour better than flexible combinations of previously proposed planning heuristics in seven of the eight environments we considered (Supplementary Table 1).

Monday, November 21, 2022

AI Isn’t Ready to Make Unsupervised Decisions

Joe McKendrick and Andy Thurai
Harvard Business Review
Originally published September 15, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails to capture or respond to the intangible human factors that go into real-life decision-making: the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem”: a hypothetical scenario, formulated long before AI came into being, in which a decision must be made about whether to alter the route of an out-of-control streetcar. The decision, which must be made in a split second, is whether to switch from the original track, where the streetcar may kill several people tied to the track, to an alternative track where, presumably, a single person would die.

While many other analogies can be drawn about difficult decisions, the trolley problem is widely regarded as the paradigm case of ethical and moral decision-making. Can it be applied to AI systems to measure whether AI is ready for the real world: whether machines can think independently and make the same justifiable ethical and moral decisions that humans would make?

Trolley problems in AI come in all shapes and sizes, and the decisions don’t necessarily need to be so deadly — though the decisions AI renders could mean trouble for a business, an individual, or even society at large. One of the co-authors of this article recently encountered his own AI “trolley moment” during a stay in an Airbnb-rented house in upstate New Hampshire. Despite amazing preview pictures and positive reviews, the place was poorly maintained, practically a dump, with condemned houses adjacent. The author planned to give the place a one-star rating and a negative review to warn others considering a stay.

However, on the second morning of the stay, the host of the house, a sweet and caring elderly woman, knocked on the door, inquiring whether the author and his family were comfortable and had everything they needed. During the conversation, the host offered to pick up some fresh fruit from a nearby farmers market. She explained that since she doesn’t have a car, she would walk a mile to a friend’s place, and the friend would then drive her to the market. She also described her hardships over the past two years: rentals had slumped due to Covid, and she was caring for someone sick full time.

Upon learning this, the author elected not to post the negative review. While the initial decision, to write a negative review, was based on facts, the decision not to post it was a purely subjective human judgment. In this case, the trolley problem was whether concern for the welfare of the elderly homeowner should supersede consideration for the comfort of other potential guests.

How would an AI program have handled this situation? Likely not as sympathetically for the homeowner. It would have delivered a fact-based decision without empathy for the human lives involved.

Saturday, November 12, 2022

Loss aversion, the endowment effect, and gain-loss framing shape preferences for noninstrumental information

Litovsky, Y., Loewenstein, G., et al.
PNAS, Vol. 119 | No. 34
August 23, 2022

Abstract

We often talk about interacting with information as we would with a physical good (e.g., “consuming content”) and describe our attachment to personal beliefs in the same way as our attachment to personal belongings (e.g., “holding on to” or “letting go of” our beliefs). But do we in fact value information the way we do objects? The valuation of money and material goods has been extensively researched, but surprisingly few insights from this literature have been applied to the study of information valuation. This paper demonstrates that two fundamental features of how we value money and material goods embodied in Prospect Theory—loss aversion and different risk preferences for gains versus losses—also hold true for information, even when it has no material value. Study 1 establishes loss aversion for noninstrumental information by showing that people are less likely to choose a gamble when the same outcome is framed as a loss (rather than gain) of information. Study 2 shows that people exhibit the endowment effect for noninstrumental information, and so value information more, simply by virtue of “owning” it. Study 3 provides a conceptual replication of the classic “Asian Disease” gain-loss pattern of risk preferences, but with facts instead of human lives, thereby also documenting a gain-loss framing effect for noninstrumental information. These findings represent a critical step in building a theoretical analogy between information and objects, and provide a useful perspective on why we often resist changing (or losing) our beliefs.

Significance

We build on Abelson and Prentice’s conjecture that beliefs are not merely valued as guides to interacting with the world, but as cherished possessions. Extending this idea to information, we show that three key phenomena which characterize the valuation of money and material goods—loss aversion, the endowment effect, and the gain-loss framing effect—also apply to noninstrumental information. We discuss, more generally, how the analogy between noninstrumental information and material goods can help make sense of the complex ways in which people deal with the huge expansion of available information in the digital age.

From the Discussion

Economists have traditionally treated the value of information as derivative of its consequences for decision-making. While prior research on noninstrumental information has shown that this narrow view of information may be incomplete, only a few accounts have attempted to explain intrinsic preferences for information. One such account argues that people seek (or avoid) information inasmuch as doing so helps them maintain their cherished beliefs. Another proposes that people choose which information to seek or avoid by considering how it will impact their actions, affect, and cognition. Yet, outside of the curiosity literature, no existing account of information valuation considers preferences for information that has neither instrumental nor (concrete) hedonic value. By showing that key features of Prospect Theory’s value function also apply to individuals’ valuation of (even noninstrumental) information, the current paper suggests that we may also value information in some of the same fundamental ways that we value physical goods.
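
The value function the authors build on is compact enough to state directly. The sketch below uses the standard Prospect Theory functional form with Tversky and Kahneman's 1992 median parameter estimates; applying it to units of information is our illustrative extension of the paper's analogy, not an estimate from these studies.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect Theory value function: concave for gains, convex for
    losses, and steeper for losses (lam > 1 encodes loss aversion).
    Parameter values are Tversky & Kahneman's 1992 median estimates."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Framing a 50/50 gamble over ten "facts" as a potential loss makes
# it less attractive than the equivalent gain framing:
gain_frame = 0.5 * prospect_value(10) + 0.5 * prospect_value(0)
loss_frame = 0.5 * prospect_value(0) + 0.5 * prospect_value(-10)
print(gain_frame, loss_frame)  # the loss-framed gamble scores lower
```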

Monday, November 7, 2022

Neural processes in antecedent anxiety modulate risk-taking behavior

Nash, K., Leota, J., & Tran, A. (2021). 
Scientific Reports, 11.

Abstract

Though real-world decisions are often made in the shadow of economic uncertainties, work problems, relationship troubles, existential angst, etc., the neural processes involved in this common experience remain poorly understood. Here, we randomly assigned participants (N = 97) to either a poignant experience of forecasted economic anxiety or a no-anxiety control condition. Using electroencephalography (EEG), we then examined how source-localized, anxiety-specific neural activation modulated risky decision making and strategic behavior in the Balloon Analogue Risk Task (BART). Previous research demonstrates opposing effects of anxiety on risk-taking, leading to contrasting predictions. On the one hand, activity in the dorsomedial PFC/anterior cingulate cortex (ACC) and anterior insula, brain regions linked with anxiety and sensitivity to risk, should mediate the effect of economic anxiety on increased risk-averse decision-making. On the other hand, activation in the ventromedial PFC, a brain region important in emotion regulation and subjective valuation in decision-making, should mediate the effect of economic anxiety on increased risky decision-making. Results revealed evidence related to both predictions. Additionally, anxiety-specific activation in the dmPFC/ACC and the anterior insula were associated with disrupted learning across the task. These results shed light on the neurobiology of antecedent anxiety and risk-taking and provide potential insight into understanding how real-world anxieties can impact decision-making processes. 

Discussion

Rarely in everyday life must we make a series of decisions while anxious events merely flit in and out of awareness. Rather, we often face looming anxieties that spill over into the decisions we make. Here, we experimentally induced this real-world experience and examined how antecedent anxiety and the accompanying neural processes modulated decision-making in a risk-taking task. Based on past research demonstrating that anxiety can have diverging effects on risk-taking, we formulated contrasting predictions. An anxious experience should modulate dmPFC/dACC and anterior insula activity, brain regions tightly linked with anxious worry, and this anxiety-specific activation should predict more risk-averse decisions in the BART. Alternatively, anxiety should modulate activation in the vmPFC, a brain region important in emotion regulation and decision-making, and this anxiety-specific activation should then predict more risk-seeking decisions in the BART, through disrupted cognitive control or heightened sensitivity to reward.

We found evidence related to both predictions. On the one hand, right anterior insula activation specific to antecedent anxiety predicted decreased risk-taking. This finding is consistent with considerable research on the neural mechanisms of risk and with the limited prior research on incidental anxiety and decision-making. For example, the threat of shock during a decision-making task increased the anterior insula’s coding of negative evaluations, and this activation predicted an increased rejection rate of risky lotteries. For the first time, we extend these prior results to antecedent anxiety. The experience of economic anxiety is a poignant and difficult-to-regulate event. Presumably, right anterior insula activation caused by the economic anxiety manipulation sustained a more cautious approach to negative outcomes that trickled down to risk-averse decision-making.
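
For readers unfamiliar with the BART, its incentive structure is easy to simulate. The sketch below assumes a uniform pop threshold; the parameters are illustrative, not taken from this study. It shows the behavioral signature the anterior insula finding predicts: an anxious, risk-averse agent that pumps less than the risk-neutral optimum earns less on average.

```python
import random

def run_bart(target_pumps, pop_range=(1, 20), balloons=1000, seed=0):
    """Simulate the Balloon Analogue Risk Task: each pump earns one
    point, but each balloon pops at a hidden threshold (drawn
    uniformly from pop_range), forfeiting that balloon's points.
    `target_pumps` is the agent's fixed risk policy."""
    rng = random.Random(seed)
    total = 0
    for _ in range(balloons):
        pop_at = rng.randint(*pop_range)
        if target_pumps < pop_at:
            total += target_pumps  # banked before the balloon popped
        # otherwise the balloon popped and this balloon pays nothing
    return total / balloons

# The risk-neutral optimum under these settings is 10 pumps; a
# risk-averse agent that pumps less sacrifices expected reward:
for pumps in (3, 7, 10, 14):
    print(pumps, "pumps ->", run_bart(pumps))
```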