Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, February 20, 2024

Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

Mello, M. M., & Guha, N. (2024).
The New England Journal of Medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901

Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being "augmented." One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible?

Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concerns can lead to overly conservative decisions, including reluctance to try new things. Yet, older forms of clinical decision support provided important opportunities to prevent errors and malpractice claims. Given the slow progress in reducing diagnostic errors, not adopting new tools also has consequences and at some point may itself become malpractice. Liability uncertainty also affects AI developers' cost of capital and incentives to develop particular products, thereby influencing which AI innovations become available and at what price.

To help health care organizations and physicians weigh AI-related liability risk against the benefits of adoption, we examine the issues that courts have grappled with in cases involving software error and what makes them so challenging. Because the signals emerging from case law remain somewhat faint, we conducted further analysis of the aspects of AI tools that elevate or mitigate legal risk. Drawing on both analyses, we provide risk-management recommendations, focusing on the uses of AI in direct patient care with a "human in the loop" since the use of fully autonomous systems raises additional issues.

(cut)

The Awkward Adolescence of Software-Related Liability

Legal precedent regarding AI injuries is rare because AI models are new and few personal-injury claims result in written opinions. As this area of law matures, it will confront several challenges.

Challenges in Applying Tort Law Principles to Health Care Artificial Intelligence (AI).

Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient. The liabilities of the physician and product maker are derived from different standards of care, but for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability.

The article is paywalled, which is not how this should work.

Saturday, February 10, 2024

How to think like a Bayesian

Michael Titelbaum
psyche.co
Originally posted 10 Jan 24

You’re often asked what you believe. Do you believe in God? Do you believe in global warming? Do you believe in life after love? And you’re often told that your beliefs are central to who you are, and what you should do: ‘Do what you believe is right.’

These belief-questions demand all-or-nothing answers. But much of life is more complicated than that. You might not believe in God, but also might not be willing to rule out the existence of a deity. That’s what agnosticism is for.

For many important questions, even three options aren’t enough. Right now, I’m trying to figure out what kinds of colleges my family will be able to afford for my children. My kids’ options will depend on lots of variables: what kinds of schools will they be able to get into? What kinds of schools might be a good fit for them? If we invest our money in various ways, what kinds of return will it earn over the next two, five, or 10 years?

Suppose someone tried to help me solve this problem by saying: ‘Look, it’s really simple. Just tell me, do you believe your oldest daughter will get into the local state school, or do you believe that she won’t?’ I wouldn’t know what to say to that question. I don’t believe that she will get into the school, but I also don’t believe that she won’t. I’m perhaps slightly more confident than 50-50 that she will, but nowhere near certain.

One of the most important conceptual developments of the past few decades is the realisation that belief comes in degrees. We don’t just believe something or not: much of our thinking, and decision-making, is driven by varying levels of confidence. These confidence levels can be measured as probabilities, on a scale from zero to 100 per cent. When I invest the money I’ve saved for my children’s education, it’s an oversimplification to focus on questions like: ‘Do I believe that stocks will outperform bonds over the next decade, or not?’ I can’t possibly know that. But I can try to assign educated probability estimates to each of those possible outcomes, and balance my portfolio in light of those estimates.
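
A minimal sketch of what a degree of belief looks like in practice, using invented numbers rather than anything from the essay: Bayes' rule takes a prior level of confidence and updates it in light of new evidence.

```python
# Toy Bayesian update with invented numbers (illustration only).
# Hypothesis H: "stocks will outperform bonds over the next decade."
prior = 0.60                   # initial confidence in H
p_evidence_given_h = 0.70      # chance of seeing this year's data if H is true
p_evidence_given_not_h = 0.40  # chance of seeing the same data if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
posterior = p_evidence_given_h * prior / p_evidence

print(f"prior: {prior:.2f}  posterior: {posterior:.2f}")  # prior: 0.60  posterior: 0.72
```

Confidence rises, but only modestly, because the evidence is only somewhat more likely under the hypothesis than under its negation.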

(cut)

Key points – How to think like a Bayesian
  1. Embrace the margins. It’s rarely rational to be certain of anything. Don’t confuse the improbable with the impossible. When thinking about extremely rare events, try thinking in odds instead of percentages.
  2. Evidence supports what makes it probable. Evidence supports the hypotheses that make the evidence likely. Increase your confidence in whichever hypothesis makes the evidence you’re seeing most probable.
  3. Attend to all your evidence. Consider all the evidence you possess that might be relevant to a hypothesis. Be sure to take into account how you learned what you learned.
  4. Don’t forget your prior opinions. Your confidence after learning some evidence should depend both on what that evidence supports and on how you saw things before it came in. If a hypothesis is improbable enough, strong evidence in its favour can still leave it unlikely.
  5. Subgroups don’t always reflect the whole. Even if a trend obtains in every subpopulation, it might not hold true for the entire population. Consider how traits are distributed across subgroups as well.

Thursday, February 8, 2024

People's thinking plans adapt to the problem they're trying to solve

Ongchoco, J. D., Knobe, J., & Jara-Ettinger, J. (2024).
Cognition, 243, 105669.

Abstract

Much of our thinking focuses on deciding what to do in situations where the space of possible options is too large to evaluate exhaustively. Previous work has found that people do this by learning the general value of different behaviors, and prioritizing thinking about high-value options in new situations. Is this good-action bias always the best strategy, or can thinking about low-value options sometimes become more beneficial? Can people adapt their thinking accordingly based on the situation? And how do we know what to think about in novel events? Here, we developed a block-puzzle paradigm that enabled us to measure people's thinking plans and compare them to a computational model of rational thought. We used two distinct response methods to explore what people think about—a self-report method, in which we asked people explicitly to report what they thought about, and an implicit response time method, in which we used people's decision-making times to reveal what they thought about. Our results suggest that people can quickly estimate the apparent value of different options and use this to decide what to think about. Critically, we find that people can flexibly prioritize whether to think about high-value options (Experiments 1 and 2) or low-value options (Experiments 3, 4, and 5), depending on the problem. Through computational modeling, we show that these thinking strategies are broadly rational, enabling people to maximize the value of long-term decisions. Our results suggest that thinking plans are flexible: What we think about depends on the structure of the problems we are trying to solve.


Some thoughts:

The study is based on the idea that people have "thinking plans" which are essentially roadmaps that guide our thoughts and actions when we are trying to solve a problem. These thinking plans are not static, but rather can change and adapt depending on the specific problem we are facing.

For example, if we are trying to solve a math problem, our thinking plan might involve breaking the problem down into smaller steps, identifying the relevant information, and applying the appropriate formulas. However, if we are trying to solve a social problem, our thinking plan might involve considering the different perspectives of the people involved, identifying potential solutions, and evaluating the consequences of each solution.

The study used computational modeling to simulate how people would solve different types of problems. The model showed that people's thinking plans were flexible and adapted to the specific problem at hand. The model also showed that these thinking plans were broadly rational, meaning that they helped people to make decisions that were in their best interests.

The findings of the study have important implications for education and other fields that are concerned with human decision-making. The study suggests that it is important to teach people how to think flexibly and adapt their thinking plans to different situations. It also suggests that we should not expect people to always make the "right" decision, as the best course of action will often depend on the specific circumstances.

Saturday, February 3, 2024

How to Navigate the Pitfalls of AI Hype in Health Care

Suran M, Hswen Y.
JAMA.
Published online January 03, 2024.

What is AI snake oil, and how might it hinder progress within the medical field? What are the inherent risks in AI-driven automation for patient care, and how can we ensure the protection of sensitive patient information while maximizing its benefits?

When it comes to using AI in medicine, progress is important—but so is proceeding with caution, says Arvind Narayanan, PhD, a professor of computer science at Princeton University, where he directs the Center for Information Technology Policy.


Here is my summary:
  • AI has the potential to revolutionize healthcare, but it is important to be aware of the hype and potential pitfalls.
  • One of the biggest concerns is bias. AI algorithms can be biased based on the data they are trained on, which can lead to unfair or inaccurate results. For example, an AI algorithm that is trained on data from mostly white patients may be less accurate at diagnosing diseases in black patients.
  • Another concern is privacy. AI algorithms require large amounts of data to work, and this data can be very sensitive. It is important to make sure that patient data is protected and that patients have control over how their data is used.
  • It is also important to remember that AI is not a magic bullet. AI can be a valuable tool, but it is not a replacement for human judgment. Doctors and other healthcare professionals need to be able to critically evaluate the output of AI algorithms and make sure that it is being used in a safe and ethical way.
Overall, the interview is a cautionary tale about the potential dangers of AI in healthcare. It is important to be aware of the risks and to take steps to mitigate them. But it is also important to remember that AI has the potential to do a lot of good in healthcare. If we develop and use AI responsibly, it can help us to improve the quality of care for everyone.

Here are some additional points that were made in the interview:
  • AI can be used to help with a variety of tasks in healthcare, such as diagnosing diseases, developing new treatments, and managing chronic conditions.
  • There are a number of different types of AI, each with its own strengths and weaknesses.
  • It is important to choose the right type of AI for the task at hand.
  • AI should always be used in conjunction with human judgment.

Tuesday, December 19, 2023

Human bias in algorithm design

Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al.
Nat Hum Behav 7, 1822–1824 (2023).

Here is how the article starts:

Algorithms are designed to learn user preferences by observing user behaviour. This causes algorithms to fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.

Many people believe that algorithms are failing to live up to their promise to reflect user preferences and improve social welfare. The problem is not technological. Modern algorithms are sophisticated and accurate. Training algorithms on unrepresentative samples contributes to the problem, but failures happen even when algorithms are trained on the population. Nor is the problem caused only by the profit motive. For-profit firms design algorithms at a cost to users, but even non-profit organizations and governments fall short.

All algorithms are built on a psychological model of what the user is doing. The fundamental constraint on this model is the narrowness of the measurable variables for algorithms to predict. We suggest that algorithms fail to reflect user preferences and enhance their welfare because algorithms rely on revealed preferences to make predictions. Designers build algorithms with the erroneous assumption that user behaviour (revealed preferences) tells us (1) what users rationally prefer (normative preferences) and (2) what will enhance user welfare. Reliance on this 95-year-old economic model, rather than the more realistic assumption that users exhibit bounded rationality, leads designers to train algorithms on user behaviour. Revealed preferences can identify unknown preferences, but revealed preferences are an incomplete — and at times misleading — measure of the normative preferences and values of users. It is ironic that modern algorithms are built on an outmoded and indefensible commitment to revealed preferences.
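
A toy sketch, not drawn from the article, of the gap the authors describe: a ranker trained purely on revealed preferences (clicks) can order items differently from what users say they actually value. The item names and numbers are invented.

```python
# Hypothetical illustration: ranking by revealed preferences (click rates)
# versus a stated-preference signal. Items and numbers are invented.
items = {
    # name: (click_rate, stated_value) -- both on a 0-1 scale
    "outrage_post":  (0.30, 0.20),
    "news_analysis": (0.10, 0.70),
    "friend_update": (0.15, 0.60),
}

by_revealed = sorted(items, key=lambda k: items[k][0], reverse=True)
by_stated = sorted(items, key=lambda k: items[k][1], reverse=True)

print("ranked by behaviour (clicks):", by_revealed)
print("ranked by stated preference: ", by_stated)
# The orderings diverge exactly when behaviour is driven by a psychological
# bias (e.g., attention capture) rather than by what users reflectively value.
```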


Here is my summary.

Human biases can be reflected in algorithms, leading to unintended discriminatory outcomes. The authors argue that algorithms are not simply objective tools, but rather embody the values and assumptions of their creators. They highlight the importance of considering psychological factors when designing algorithms, as human behavior is often influenced by biases. To address this issue, the authors propose a framework for developing psychologically informed algorithms that can better capture user preferences and enhance social welfare. They emphasize the need for a more holistic approach to algorithm design that goes beyond technical considerations and takes into account the human element.

Wednesday, December 6, 2023

People are increasingly following their heart and not the Bible - poll

Ryan Foley
Christian Today
Originally published 2 DEC 23

A new study reveals that less than one-third of Americans believe the Bible should serve as the foundation for determining right and wrong, even as most people express support for traditional moral values.

The fourth installment of the America's Values Study, released by the Cultural Research Center at Arizona Christian University Tuesday, asked respondents for their thoughts on traditional moral values and what they would like to see as "America's foundation for determining right and wrong." The survey is based on responses from 2,275 U.S. adults collected in July 2022.

Overall, when asked to identify what they viewed as the primary determinant of right and wrong in the U.S., a plurality of participants (42%) said: "what you feel in your heart." An additional 29% cited majority rule as their desired method for determining right and wrong, while just 29% expressed a belief that the principles laid out in the Bible should determine the understanding of right and wrong in the U.S. That figure rose to 66% among Spiritually Active, Governance Engaged Conservative Christians.

The only other demographic subgroups where at least a plurality of respondents indicated a desire for the Bible to serve as the determinant of right and wrong in the U.S. were respondents who attend an evangelical church (62%), self-described Republicans (57%), theologically defined born-again Christians (54%), self-identified conservatives (49%), those who are at least 50 years of age (39%), members of all Protestant congregations (39%), self-identified Christians (38%) and those who attend mainline Protestant churches (36%).

By contrast, an outright majority of respondents who do not identify with a particular faith at all (53%), along with half of LGBT respondents (50%), self-described moderates (47%), political independents (47%), Democrats (46%), self-described liberals (46%) and Catholic Church attendees (46%) maintained that "what you feel in your heart" should form the foundation of what Americans view as right and wrong.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 OCT 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.
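
A minimal sketch of the moral-parliament idea under my own simplifying assumptions, not the researchers' implementation: each moral position scores the available options, and a Nash-style cooperative-bargaining rule selects the option that maximizes the product of gains over each position's fallback. The positions, options, and scores below are invented.

```python
# Hypothetical "moral parliament" via cooperative (Nash) bargaining.
# Positions, options, scores, and the fallback point are invented.
from math import prod

scores = {
    # option: scores from (utilitarian, deontological, virtue-ethics) positions
    "option_a": (0.9, 0.2, 0.5),
    "option_b": (0.6, 0.7, 0.6),
    "option_c": (0.4, 0.5, 0.8),
}
fallback = (0.1, 0.1, 0.1)  # payoff to each position if no consensus is reached

def nash_product(option):
    gains = [s - f for s, f in zip(scores[option], fallback)]
    return prod(gains) if all(g > 0 for g in gains) else 0.0

print(max(scores, key=nash_product))  # "option_b": balanced gains beat a lopsided winner
```

The design choice worth noting is that the product rule rewards options acceptable to every position, rather than letting one position's strong preference dominate.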


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.

Tuesday, November 21, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Here is my summary:

The authors argue that many different biases, such as the bias blind spot, hostile media bias, egocentric/ethnocentric bias, and outcome bias, can be traced back to the combination of a fundamental prior belief and humans' tendency toward belief-consistent information processing.

Belief-consistent information processing is the process of attending to, interpreting, and remembering information in a way that is consistent with one's existing beliefs. This process can lead to biases when it results in people ignoring or downplaying information that is inconsistent with their beliefs, and giving undue weight to information that is consistent with their beliefs.

The authors propose that different biases can be distinguished by the specific belief that guides information processing. For example, the bias blind spot is characterized by the belief that one is less biased than others, while hostile media bias is characterized by the belief that the media is biased against one's own group. However, the authors also argue that different biases may share the same underlying belief, and differ only in the specific outcome of information processing that is assessed. For example, both the bias blind spot and hostile media bias may involve the belief that one is more objective than others, but the bias blind spot is assessed in the context of self-evaluations, while hostile media bias is assessed in the context of evaluations of others.

The authors' framework has several advantages over existing theoretical explanations of biases. First, it provides a more parsimonious explanation for a wide range of biases. Second, it generates novel hypotheses that can be tested empirically. For example, the authors hypothesize that people who are more likely to believe in one bias will also be more likely to believe in other biases. Third, the framework has implications for interventions to reduce biases. For example, the authors suggest that interventions to reduce biases could focus on helping people to become more aware of their own biases and to develop strategies for resisting the tendency toward belief-consistent information processing.

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethic Theory Moral Prac 26, 361–375 (2023).

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. Moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Wednesday, October 11, 2023

The Best-Case Heuristic: 4 Studies of Relative Optimism, Best-Case, Worst-Case, & Realistic Predictions in Relationships, Politics, & a Pandemic

Sjåstad, H., & Van Bavel, J. (2023).
Personality and Social Psychology Bulletin, 0(0).
https://doi.org/10.1177/01461672231191360

Abstract

In four experiments covering three different life domains, participants made future predictions in what they considered the most realistic scenario, an optimistic best-case scenario, or a pessimistic worst-case scenario (N = 2,900 Americans). Consistent with a best-case heuristic, participants made “realistic” predictions that were much closer to their best-case scenario than to their worst-case scenario. We found the same best-case asymmetry in health-related predictions during the COVID-19 pandemic, for romantic relationships, and a future presidential election. In a fully between-subject design (Experiment 4), realistic and best-case predictions were practically identical, and they were naturally made faster than the worst-case predictions. At least in the current study domains, the findings suggest that people generate “realistic” predictions by leaning toward their best-case scenario and largely ignoring their worst-case scenario. Although political conservatism was correlated with lower covid-related risk perception and lower support of early public-health interventions, the best-case prediction heuristic was ideologically symmetric.


Here is my summary:

This research examined how people make predictions about the future in different life domains, such as health, relationships, and politics. The researchers found that people tend to make predictions that are closer to their best-case scenario than to their worst-case scenario, even when asked to make a "realistic" prediction. This is known as the best-case heuristic.

The researchers conducted four experiments to test the best-case heuristic. In the first experiment, participants were asked to make predictions about their risk of getting COVID-19, their satisfaction with their romantic relationship in one year, and the outcome of the next presidential election. Participants were asked to make three predictions for each event: a best-case scenario, a worst-case scenario, and a realistic scenario. The results showed that participants' "realistic" predictions were much closer to their best-case predictions than to their worst-case predictions.
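
One simple way to express that asymmetry, with invented numbers rather than the authors' measure: place the "realistic" prediction on the interval between the worst-case and best-case predictions, where values above 0.5 mean it leans toward the best case.

```python
# Toy "best-case lean" index with invented predictions.
# 0.0 = realistic prediction sits at the worst case, 1.0 = at the best case.
def best_case_lean(worst, realistic, best):
    return (realistic - worst) / (best - worst)

# e.g., predicted relationship satisfaction (1-10 scale) one year from now
worst, realistic, best = 3.0, 8.0, 9.0
print(round(best_case_lean(worst, realistic, best), 2))  # 0.83 -> leans heavily toward best case
```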

The researchers found the same best-case asymmetry in the other three experiments, which covered a variety of life domains, including health, relationships, and politics. The findings suggest that people use a best-case heuristic when making predictions about the future, even in serious and important matters.

The best-case heuristic has several implications for individuals and society. On the one hand, it can help people to maintain a positive outlook on life and to cope with difficult challenges. On the other hand, it can also lead to unrealistic expectations and to a failure to plan for potential problems.

Overall, the research on the best-case heuristic suggests that people's predictions about the future are often biased towards optimism. This is something to be aware of when making important decisions and when planning for the future.

Friday, September 15, 2023

Older Americans are more vulnerable to prior exposure effects in news evaluation.

Lyons, B. A. (2023). 
Harvard Kennedy School Misinformation Review.

Outline

Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations). I find that prior exposure effects increase with age—being strongest for those in the oldest cohort (60+)—especially for false news. I discuss implications for the design of media literacy programs and policies regarding targeted political advertising aimed at this group.
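
A sketch of the kind of interaction model this claim implies, run on simulated data rather than the author's survey; the variable names and analysis choices are assumptions, not the paper's actual code. The key quantity is the exposure-by-age interaction: a positive coefficient means the prior-exposure effect grows with age.

```python
# Hypothetical sketch of an exposure-by-age interaction, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_respondents, n_headlines = 200, 10
n = n_respondents * n_headlines

df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_respondents), n_headlines),
    "age": np.repeat(rng.integers(18, 80, n_respondents), n_headlines),
    "exposed": rng.integers(0, 2, n),  # headline already seen in wave 1?
})
# Simulate the hypothesized pattern: the exposure effect grows with age.
df["accuracy"] = 0.2 * df["exposed"] + 0.01 * df["exposed"] * df["age"] + rng.normal(0, 1, n)

model = smf.ols("accuracy ~ exposed * age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(model.params["exposed:age"])  # positive: exposure effect increases with age
```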

Essay Summary
  • I used three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of actual mainstream, hyperpartisan, or false political headlines. Respondents saw a sample of headlines in the first wave and all headlines in the second wave, allowing me to determine if prior exposure increases perceived accuracy differentially across age.  
  • I found that the effect of prior exposure to headlines on perceived accuracy increases with age. The effect increases linearly with age, with the strongest effect for those in the oldest age cohort (60+). These age differences were most pronounced for false news.
  • These findings suggest that repeated exposure can help account for the positive relationship between age and sharing false information online. However, the size of this effect also underscores that other factors (e.g., greater motivation to derogate the out-party) may play a larger role. 
The beginning of the Implications Section

Web-tracking and social media trace data paint a concerning portrait of older news users. Older American adults were much more likely to visit dubious news sites in 2016 and 2020 (Guess, Nyhan, et al., 2020; Moore et al., 2023), and were also more likely to be classified as false news “supersharers” on Twitter, a group who shares the vast majority of dubious news on the platform (Grinberg et al., 2019). Likewise, this age group shares about seven times more links to these domains on Facebook than younger news consumers (Guess et al., 2019; Guess et al., 2021). 

Interestingly, however, older adults appear to be no worse, if not better, at identifying false news stories than younger cohorts when asked in surveys (Brashier & Schacter, 2020). Why might older adults identify false news in surveys but fall for it “in the wild?” There are likely multiple factors at play, ranging from social changes across the lifespan (Brashier & Schacter, 2020) to changing orientations to politics (Lyons et al., 2023) to cognitive declines (e.g., in memory) (Brashier & Schacter, 2020). In this paper, I focus on one potential contributor. Specifically, I tested the notion that differential effects of prior exposure to false news helps account for the disjuncture between older Americans’ performance in survey tasks and their behavior in the wild.

A large body of literature has been dedicated to exploring the magnitude and potential boundary conditions of the illusory truth effect (Hassan & Barber, 2021; Henderson et al., 2021; Pillai & Fazio, 2021)—a phenomenon in which false statements or news headlines (De keersmaecker et al., 2020; Pennycook et al., 2018) come to be believed over multiple exposures. Might this effect increase with age? As detailed by Brashier and Schacter (2020), cognitive deficits are often blamed for older news users’ behaviors. This may be because cognitive abilities are strongest in young adulthood and slowly decline beyond that point (Salthouse, 2009), resulting in increasingly effortful cognition (Hess et al., 2016). As this process unfolds, older adults may be more likely to fall back on heuristics when judging the veracity of news items (Brashier & Marsh, 2020). Repetition, the source of the illusory truth effect, is one heuristic that may be relied upon in such a scenario. This is because repeated messages feel easier to process and thus are seen as truer than unfamiliar ones (Unkelbach et al., 2019).

Sunday, September 10, 2023

Seeing and sanctioning structural unfairness

Flores-Robles, G., & Gantman, A. P. (2023, June 28).
PsyArXiv

Abstract

People tend to explain wrongdoing as the result of a bad actor or bad system. In five studies (four U.S. online convenience, one U.S. representative sample), we tested whether the way people understand unfairness affects how they sanction it. In Pilot 1A (N = 40), people interpreted unfair offers in an economic game as the result of a bad actor (vs. unfair rules), unless incentivized (Pilot 1B, N = 40), which, in Study 1 (N = 370), predicted costly punishment of individuals (vs. changing unfair rules). In Studies 2 (N = 500) and 3, (N = 470, representative of age, gender, and ethnicity in the U.S), we found that people paid to change the rules for the final round of the game (vs. punished individuals), when they were randomly assigned a bad system (vs. bad actor) explanation for prior identical unfair offers. Explanations for unfairness affect how people sanction it.

Statement of Relevance

Humans are facing massive problems including economic and social inequality. These problems are often framed in the media, and by friends and experts, as a problem either of individual action (e.g., racist beliefs) or of structures (e.g., discriminatory housing laws). The current research uses a context-free economic game to ask whether these explanations have any effect on what people think should happen next. We find that people tend to explain unfair offers in the game in terms of bad actors (unless incentivized) which is related to punishing individuals over changing the game itself.  When people are told that the unfairness they witnessed was the result of a bad actor, they prefer to punish that actor; when they are told that the same unfair behavior is the result of unfair rules, they prefer to change the rules. Our understanding of the mechanisms of inequality affect how we want to sanction it.

My summary:

The article discusses how people tend to explain wrongdoing as the result of a bad actor or bad system.  In essence, this is a human, decision-making process. The authors conducted five studies to test whether the way people understand unfairness affects how they sanction it. They found that people are more likely to punish individuals for unfair behavior when they believe that the behavior is the result of a bad actor. However, they are more likely to try to change the system (or the rules) when they believe that the behavior is the result of a bad system.

The authors argue that these findings have important implications for ethics, morality, and values. They suggest that we need to be more aware of the way we explain unfairness, because our explanations can influence how we respond to it. How an individual frames the issue is key to identifying workable solutions, as well as to recognizing biases. They also suggest that we need to be more critical of the systems that we live in, because these systems can create unfairness.

The article raises a number of ethical, moral, and value-related questions. For example, what is the responsibility of individuals to challenge unfair systems? What is the role of government in addressing structural unfairness? And what are the limits of individual and collective action in addressing unfairness?

The article does not provide easy answers to these questions. However, it does provide a valuable framework for thinking about unfairness and how we can respond to it.

Friday, August 18, 2023

Evidence for Anchoring Bias During Physician Decision-Making

Ly, D. P., Shekelle, P. G., & Song, Z. (2023).
JAMA Internal Medicine, 183(8), 818.
https://doi.org/10.1001/jamainternmed.2023.2366

Abstract

Introduction

Cognitive biases are hypothesized to influence physician decision-making, but large-scale evidence consistent with their influence is limited. One such bias is anchoring bias, or the focus on a single—often initial—piece of information when making clinical decisions without sufficiently adjusting to later information.

Objective

To examine whether physicians were less likely to test patients with congestive heart failure (CHF) presenting to the emergency department (ED) with shortness of breath (SOB) for pulmonary embolism (PE) when the patient visit reason section, documented in triage before physicians see the patient, mentioned CHF.

Design, Setting, and Participants

In this cross-sectional study of 2011 to 2018 national Veterans Affairs data, patients with CHF presenting with SOB in Veterans Affairs EDs were included in the analysis. Analyses were performed from July 2019 to January 2023.

Conclusions and Relevance

In this cross-sectional study among patients with CHF presenting with SOB, physicians were less likely to test for PE when the patient visit reason that was documented before they saw the patient mentioned CHF. Physicians may anchor on such initial information in decision-making, which in this case was associated with delayed workup and diagnosis of PE.

Here is the conclusion of the paper:

In conclusion, among patients with CHF presenting to the ED with SOB, we find that ED physicians were less likely to test for PE when the initial reason for visit, documented before the physician's evaluation, specifically mentioned CHF. These results are consistent with physicians anchoring on initial information. Presenting physicians with the patient’s general signs and symptoms, rather than specific diagnoses, may mitigate this anchoring. Other interventions include refining knowledge of findings that distinguish between alternative diagnoses for a particular clinical presentation.
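
The core comparison behind this result can be sketched in a few lines; the counts below are invented, and this is not the authors' actual analysis, which adjusted for many patient and facility characteristics.

```python
# Hypothetical sketch of the core comparison (invented counts): among CHF
# patients presenting with shortness of breath, compare PE-testing rates by
# whether the triage visit reason mentioned CHF.
mentions_chf = {"n": 4000, "pe_tested": 160}   # visit reason mentions CHF
no_mention = {"n": 4000, "pe_tested": 360}     # visit reason omits CHF

rate_mention = mentions_chf["pe_tested"] / mentions_chf["n"]
rate_no_mention = no_mention["pe_tested"] / no_mention["n"]
odds = lambda p: p / (1 - p)

print(f"PE testing rate, CHF mentioned:     {rate_mention:.1%}")
print(f"PE testing rate, CHF not mentioned: {rate_no_mention:.1%}")
print(f"odds ratio: {odds(rate_mention) / odds(rate_no_mention):.2f}")
# An odds ratio well below 1 is the anchoring pattern the study reports.
```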

Quick snapshot:

Anchoring bias is a cognitive bias that causes us to rely too heavily on the first piece of information we receive when making a decision. This can lead us to make inaccurate or suboptimal decisions, especially when the initial information is not accurate or relevant.

The findings of this study suggest that anchoring bias may be a significant factor in physician decision-making. This could lead to delayed or missed diagnoses, which could have serious consequences for patients.

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A.A., Wang, S.T, et al. (2023).
Journal of Experimental Social Psychology
Volume 108, September 2023, 104499

Abstract

A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on one single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants' judgments into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism in a scenario with ratio of 5:1 decreases when the ratio decreases (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.
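
The threshold-deontology pattern described above can be written down compactly; this is my own toy formulation with an invented threshold, not the authors' model: an action is endorsed only when the expected ratio of lives saved to lives ended clears a person-specific threshold.

```python
# Toy threshold deontology: endorse killing only when the expected ratio of
# lives saved to lives ended exceeds a personal threshold (values invented).
def endorses_action(lives_saved, lives_ended, p_save, p_harm, threshold):
    expected_ratio = (p_save * lives_saved) / (p_harm * lives_ended)
    return expected_ratio >= threshold

threshold = 4.0  # this person's "bad enough" cutoff
print(endorses_action(5, 1, p_save=1.0, p_harm=1.0, threshold=threshold))   # True  (ratio 5)
print(endorses_action(3, 1, p_save=1.0, p_harm=1.0, threshold=threshold))   # False (ratio 3)
print(endorses_action(10, 1, p_save=0.5, p_harm=1.0, threshold=threshold))  # True  (expected ratio 5)
```

Different participants correspond to different thresholds, which is why the same ratio can elicit seemingly consequentialist judgments from some respondents and deontological judgments from others.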


My summary:

This research provides new insights into how people make moral judgments. It suggests that people are not simply weighing the number of lives saved against the number of lives lost, but that they are also taking into account the ratio of lives saved to lives lost and the probability of each outcome occurring. This research has important implications for our understanding of moral decision-making and for the development of moral education programs.

Tuesday, August 8, 2023

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E.A. et al. 
Nat Rev Psychol 1, 524–536 (2022).

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

(cut)

Individual benefits

Intellectual humility might also have direct consequences for individuals’ wellbeing. People who reason about social conflicts in an intellectually humbler manner and consider others’ perspectives (components of wise reasoning) are more likely to report higher levels of life satisfaction and less negative affect compared to people who do not. Leaders who are higher in intellectual humility are also higher in emotional intelligence and receive higher satisfaction ratings from their followers, which suggests that intellectual humility could benefit professional life. Nonetheless, intellectual humility is not associated with personal wellbeing in all contexts: religious leaders who see their religious beliefs as fallible have lower wellbeing relative to leaders who are less intellectually humble in their beliefs.

Intellectual humility might also help people to make well-informed decisions. Intellectually humbler people are better able to differentiate between strong and weak arguments, even if those arguments go against their initial beliefs. Intellectual humility might also protect against memory distortions. Intellectually humbler people are less likely to claim falsely that they have seen certain statements before. Likewise, intellectually humbler people are more likely to scrutinize misinformation and are more likely to intend to receive the COVID-19 vaccine.

Lastly, intellectual humility is positively associated with knowledge acquisition, learning and educational achievement. Intellectually humbler people are more motivated to learn and more knowledgeable about general facts. Likewise, intellectually humbler high school and university students expend greater effort when learning difficult material, are more receptive to assignment feedback and earn higher grades.

Despite evidence of individual benefits associated with intellectual humility, much of this work is correlational. Thus, associations could be the product of confounding factors such as agreeableness, intelligence or general virtuousness. Longitudinal or experimental studies are needed to address the question of whether and under what circumstances intellectual humility promotes individual benefits. Notably, philosophical theorizing about the situation-specific virtuousness of the construct suggests that high levels of intellectual humility are unlikely to benefit all people in all situations.


What is intellectual humility? Intellectual humility is the ability to recognize the limits of one's knowledge and to be open to new information and perspectives.

Predictors of intellectual humility: There are a number of factors that can predict intellectual humility, including:
  • Personality traits: People who are high in openness to experience and agreeableness are more likely to be intellectually humble.
  • Cognitive abilities: People who are better at thinking critically and evaluating evidence are also more likely to be intellectually humble.
  • Cultural factors: People who live in cultures that value open-mindedness and tolerance are more likely to be intellectually humble.
Consequences of intellectual humility: Intellectual humility has a number of positive consequences, including:
  • Better decision-making: Intellectually humble people are more likely to make better decisions because they are more open to new information and perspectives.
  • Enhanced learning: Intellectually humble people are more likely to learn from their mistakes and to grow as individuals.
  • Stronger relationships: Intellectually humble people are more likely to have strong relationships because they are more willing to listen to others and to consider their perspectives.

Overall, intellectual humility is a valuable trait that can lead to a number of positive outcomes.

Thursday, August 3, 2023

The persistence of cognitive biases in financial decisions across economic groups

Ruggeri, K., Ashcroft-Jones, S. et al.
Sci Rep 13, 10329 (2023).

Abstract

While economic inequality continues to rise within countries, efforts to address it have been largely ineffective, particularly those involving behavioral approaches. It is often implied but not tested that choice patterns among low-income individuals may be a factor impeding behavioral interventions aimed at improving upward economic mobility. To test this, we assessed rates of ten cognitive biases across nearly 5000 participants from 27 countries. Our analyses were primarily focused on 1458 individuals that were either low-income adults or individuals who grew up in disadvantaged households but had above-average financial well-being as adults, known as positive deviants. Using discrete and complex models, we find evidence of no differences within or between groups or countries. We therefore conclude that choices impeded by cognitive biases alone cannot explain why some individuals do not experience upward economic mobility. Policies must combine both behavioral and structural interventions to improve financial well-being across populations.

From the Discussion section

This study aimed to determine if rates of cognitive biases were different between positive deviants and low-income adults in a way that might explain some elements of what impedes or facilitates upward economic mobility. We anticipated finding small-to-moderate effects between groups indicating positive deviants were less prone to biases involving risk and uncertainty in financial choices. However, across a sample of nearly 5000 participants from 27 countries, of which 1458 were low-income or positive deviants, we find no evidence of any difference in the rates of cognitive biases—minor or otherwise—and no systematic variability to indicate patterns vary globally.

In sum, we find clear evidence that resistance to cognitive biases is not a factor contributing to or impeding upward economic mobility in our sample. Taken along with related work showing that temporal choice anomalies are tied more to economic environment rather than individual financial circumstances, our findings are (unintentionally) a major validation of arguments (especially that of Bertrand, Mullainathan, and Shafir) stating that poorer individuals are not uniquely prone to cognitive biases that alone explain protracted poverty. It also supports arguments that scarcity is a greater driver of decisions, as individuals of different income groups are equally influenced by biases and context-driven cues.

What makes these findings particularly reliable is that multiple possible approaches to analyses had to be considered while working with the data, some of which were considered into extreme detail before selecting the optimal approach. As our measures were effective at eliciting biases on a scale to be expected based on existing research, and as there were relatively low correlations between individual biases (e.g., observing loss aversion in one participant is not necessarily a strong predictor of also observing any other specific bias), we conclude that there is no evidence from our sample to support that biases are directly associated with potentially harming optimal choices uniquely amongst low-income individuals.
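
A sketch of the group comparison the discussion describes, run on simulated data in which there is no true difference (matching the reported result); the group sizes and bias rate are invented, and this is not the study's analysis pipeline.

```python
# Hypothetical group comparison on simulated data with no true difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
positive_deviants = rng.binomial(1, 0.55, 700)   # 1 = participant showed the bias
low_income_adults = rng.binomial(1, 0.55, 758)

t_stat, p_value = stats.ttest_ind(positive_deviants, low_income_adults)
print(f"bias rate, positive deviants: {positive_deviants.mean():.2f}")
print(f"bias rate, low-income adults: {low_income_adults.mean():.2f}")
print(f"p-value for a group difference: {p_value:.2f}")  # large p: no detectable difference
```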

Conclusion

We sought to determine if individuals that had overcome low-income childhoods showed significantly different rates of cognitive biases from individuals that remained low-income as adults. We comprehensively reject our initial hypotheses and conclude that outcomes are not tied—at least not exclusively or potentially even meaningfully—to resistance to cognitive biases. Our research does not reject the notion that individual behavior and decision-making may directly relate to upward economic mobility. Instead, we narrowly conclude that biased decision-making does not alone explain a significant proportion of population-level economic inequality. Thus, any attempts to reduce economic inequality must involve both behavioral and structural aspects. Otherwise, similar decisions between disadvantaged individuals may not lead to similar outcomes. Where combined effectively, it will be possible to assess if genuine impact has been made on the financial well-being of individuals and populations.
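
As a methodological aside, the core analysis described here amounts to comparing the rate of each bias between positive deviants and low-income adults, and then noting how weakly the individual biases correlate with one another. The Python sketch below is a hypothetical illustration of that kind of comparison, not the authors' pipeline (the paper itself reports using discrete and complex models); the bias names, group sizes, rates, and tests are made-up assumptions, and the data are simulated so that, as in the paper, no group differences exist.

```python
# A hypothetical sketch of this kind of analysis, not the authors' code: compare
# the rate of each bias between positive deviants and low-income adults, then
# check how weakly individual biases correlate. All data are simulated, and the
# bias names, group sizes, and underlying rates are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 700  # hypothetical group sizes, roughly on the order of the 1458 total

# Binary indicators (1 = bias exhibited) for three illustrative biases, generated
# with identical underlying rates in both groups to mirror the paper's null result.
biases = ["loss_aversion", "present_bias", "framing_effect"]
rates = [0.55, 0.40, 0.60]

groups = {
    name: np.column_stack([rng.binomial(1, p, n_per_group) for p in rates])
    for name in ["positive_deviants", "low_income_adults"]
}

# Between-group comparison: one chi-square test of independence per bias.
for j, bias in enumerate(biases):
    counts = []
    for name in groups:
        exhibited = int(groups[name][:, j].sum())
        counts.append([exhibited, n_per_group - exhibited])
    chi2, p, _, _ = stats.chi2_contingency(np.array(counts))
    print(f"{bias}: chi2 = {chi2:.2f}, p = {p:.3f}")

# Within-group correlations between biases: with data like these, the
# off-diagonal entries sit near zero, i.e. showing one bias is a weak
# predictor of showing any other.
print(np.round(np.corrcoef(groups["low_income_adults"], rowvar=False), 2))
```

On simulated data of this shape, the tests come back non-significant and the correlations hover near zero, which is the pattern of the null result the paper reports at far larger scale and with more sophisticated models.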

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of online experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The information participants received about the advisor was manipulated, including, in some conditions, information that gave them reason to distrust it.

Results: The authors found that participants trusted the AI advisor's advice even when they knew it was untrustworthy. This was especially true when the advisor provided a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to a kind of "zombie trust" in AI-powered advisors: we may follow their advice even when we know they are untrustworthy. This is concerning, because it can lead to bad decisions based on the advice of untrustworthy AI systems. By contrast, decision-makers did disregard advice when it came from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.
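
To make the transparency point in the excerpt above more concrete, here is a minimal, hypothetical sketch of a decision aid that states its value weights outright. It is not the system Meier and colleagues propose; the options, the 0-to-1 scores, and the weights on autonomy and beneficence are illustrative assumptions, chosen only to show how a committee could trace a disagreement with the aid back to one explicit, contestable number.

```python
# A minimal, hypothetical sketch of a decision aid that makes its value weights
# explicit. This is not the system proposed by Meier and colleagues; the options,
# the 0-to-1 scores, and the weights on autonomy and beneficence are made-up
# assumptions used only to illustrate the transparency point.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    autonomy: float      # how well the option respects patient autonomy (0-1, assumed)
    beneficence: float   # how well the option serves the patient's welfare (0-1, assumed)


def recommend(options, weights):
    """Score each option as an explicit weighted sum and return the options
    ranked from highest to lowest score, so the weights doing the work are visible."""
    scored = [
        (opt.name, round(weights["autonomy"] * opt.autonomy
                         + weights["beneficence"] * opt.beneficence, 3))
        for opt in options
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


options = [
    Option("respect the patient's refusal", autonomy=0.9, beneficence=0.4),
    Option("treat over the patient's objection", autonomy=0.2, beneficence=0.8),
]

# The aid's stated weights, followed by a committee's alternative weighting that
# gives beneficence more importance in this particular case.
print(recommend(options, {"autonomy": 0.7, "beneficence": 0.3}))
print(recommend(options, {"autonomy": 0.3, "beneficence": 0.7}))
```

Reweighting beneficence flips the recommendation, and a committee can point to exactly that change when explaining why its judgment diverges from the aid's, which is the kind of explicit, contestable reasoning the excerpt has in mind.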

Wednesday, June 21, 2023

3 Strategies for Making Better, More Informed Decisions

Francesca Gino
Harvard Business Review
Originally published 25 May 23

Here is an excerpt:

Think counterfactually about previous decisions you’ve made.

Counterfactual thinking invites you to consider different courses of action you could have taken to gain a better understanding of the factors that influenced your choice. For example, if you missed a big deadline on a work project, you might reflect on how working harder, asking for help, or renegotiating the deadline could have affected the outcome. This reflection can help you recognize which factors played a significant role in your decision-making process — for example, valuing getting the project done on your own versus getting it done on time — and identify changes you might want to make when it comes to future decisions.

The 1998 movie Sliding Doors offers a great example of how counterfactual thinking can help us understand the forces that shape our decisions. The film explores two alternate storylines for the main character, Helen (played by Gwyneth Paltrow), based on whether she catches an upcoming subway train or misses it. While watching both storylines unfold, we gain insight into different factors that influence Helen’s life choices.

Similarly, engaging in counterfactual thinking can help you think through choices you’ve made by expanding your focus to multiple frames of reference beyond the present outcome. This type of reflection encourages you to take note of different perspectives and reach a more balanced view of your choices. By thinking counterfactually, you can look at the existing data in a less biased way.

Challenge your assumptions.

You can also fight self-serving biases by actively seeking out information that challenges your beliefs and assumptions. This can be uncomfortable, as it could threaten your identity and worldview, but it’s a key step in developing a more nuanced and informed perspective.

One way to do this is to purposely expose yourself to different perspectives in order to broaden your understanding of an issue. Take Satya Nadella, the CEO of Microsoft. When he assumed the role in 2014, he recognized that the company’s focus on Windows and Office was limiting its growth potential. Not only did the company need a new strategy, but he also recognized that the culture needed to evolve.

In order to expand the company’s horizons, Nadella sought out talent from different backgrounds and industries, who brought with them a diverse range of perspectives. He also encouraged Microsoft employees to experiment and take risks, even if it meant failing along the way. By purposefully exposing himself and his team to different perspectives and new ideas, Nadella was able to transform Microsoft into a more innovative and customer-focused company, with a renewed focus on cloud computing and artificial intelligence.