Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, October 11, 2023

The Best-Case Heuristic: 4 Studies of Relative Optimism, Best-Case, Worst-Case, & Realistic Predictions in Relationships, Politics, & a Pandemic

Sjåstad, H., & Van Bavel, J. (2023).
Personality and Social Psychology Bulletin, 0(0).
https://doi.org/10.1177/01461672231191360

Abstract

In four experiments covering three different life domains, participants made future predictions in what they considered the most realistic scenario, an optimistic best-case scenario, or a pessimistic worst-case scenario (N = 2,900 Americans). Consistent with a best-case heuristic, participants made “realistic” predictions that were much closer to their best-case scenario than to their worst-case scenario. We found the same best-case asymmetry in health-related predictions during the COVID-19 pandemic, for romantic relationships, and a future presidential election. In a fully between-subject design (Experiment 4), realistic and best-case predictions were practically identical, and they were naturally made faster than the worst-case predictions. At least in the current study domains, the findings suggest that people generate “realistic” predictions by leaning toward their best-case scenario and largely ignoring their worst-case scenario. Although political conservatism was correlated with lower covid-related risk perception and lower support of early public-health interventions, the best-case prediction heuristic was ideologically symmetric.


Here is my summary:

This research examined how people make predictions about the future in different life domains, such as health, relationships, and politics. The researchers found that people tend to make predictions that are closer to their best-case scenario than to their worst-case scenario, even when asked to make a "realistic" prediction. This is known as the best-case heuristic.

The researchers conducted four experiments to test the best-case heuristic, asking participants to predict outcomes such as their risk of getting COVID-19, their satisfaction with their romantic relationship in one year, and the result of the next presidential election. For each event, participants made three predictions: a best-case scenario, a worst-case scenario, and a realistic scenario. The results showed that participants' "realistic" predictions were much closer to their best-case predictions than to their worst-case predictions.

The same best-case asymmetry appeared in all four experiments, including a fully between-subject design (Experiment 4) in which realistic and best-case predictions were practically identical. The findings suggest that people use a best-case heuristic when making predictions about the future, even in serious and important matters.

The best-case heuristic has several implications for individuals and society. On the one hand, it can help people to maintain a positive outlook on life and to cope with difficult challenges. On the other hand, it can also lead to unrealistic expectations and to a failure to plan for potential problems.

Overall, the research on the best-case heuristic suggests that people's predictions about the future are often biased towards optimism. This is something to be aware of when making important decisions and when planning for the future.

Wednesday, October 4, 2023

Humans’ Bias Blind Spot and Its Societal Significance

Pronin, E., & Hazel, L. (2023).
Current Directions in Psychological Science, 0(0).

Abstract

Human beings have a bias blind spot. We see bias all around us but sometimes not in ourselves. This asymmetry hinders self-knowledge and fuels interpersonal misunderstanding and conflict. It is rooted in cognitive mechanics differentiating self- and social perception as well as in self-esteem motives. It generalizes across social, cognitive, and behavioral biases; begins in childhood; and appears across cultures. People show a bias blind spot in high-stakes contexts, including investing, medicine, human resources, and law. Strategies for addressing the problem are described.

(cut)

Bias-limiting procedures

When it comes to eliminating bias, attempts to overcome it via conscious effort and educational training are not ideal. A different strategy is worth considering, when possible: preventing people’s biases from having a chance to operate in the first place, by limiting their access to biasing information. Examples include conducting auditions behind a screen (discussed earlier) and blind review of journal submissions. If fully blocking access to potentially biasing information is not possible or carries more costs than benefits, another less stringent option is worth considering, that is, controlling when the information is presented so that potentially biasing information comes late, ideally after a tentative judgment is made (e.g., “sequential unmasking”; Dror, 2018; “temporary cloaking”; Kang, 2021).

Because of the BBS (bias blind spot), people can be resistant to procedures like this that limit their access to biasing information (see Fig. 3). For example, forensics experts prefer consciously trying to avoid bias over being shielded from even irrelevant biasing information (Kukucka et al., 2017). When high school teachers and ensemble singers were asked to assess blinding procedures (in auditioning and grading), they opposed them more for their own group than for the other group and even more for themselves personally (Pronin et al., 2022). This opposition is consistent with experiments showing that people are unconcerned about the effects of biasing decision processes when it comes to their own decisions (Hansen et al., 2014). In those experiments, participants made judgments using a biasing decision procedure (e.g., judging the quality of paintings only after looking to see if someone famous painted them). They readily acknowledged that the procedure was biased, nonetheless made decisions that were biased by that procedure, and then insisted that their conclusions were objective. This unwarranted confidence is a barrier to the self-imposition of bias-reducing procedures. It suggests the need for adopting procedures like this at the policy level rather than counting on individuals or their organizations to do so.

A different bias-limiting procedure that may induce resistance for these same reasons, and that therefore may also benefit from institutional or policy-level implementation, involves precommitting to decision criteria (e.g., Norton et al., 2004; Uhlmann & Cohen, 2005). For example, the human resources officer who precommits to judging job applicants more on the basis of industry experience versus educational background cannot then change that emphasis after seeing that their favorite candidate has unusually impressive academic credentials. This logic is incorporated, for example, into the system of allocating donor organs in the United States, which has explicit and predetermined criteria for making those allocations in order to avoid the possibility of bias in this high-stakes arena. When decision makers are instructed to provide objective criteria for their decision not before making that decision but rather when providing it—that is, the more typical request made of them—this not only makes bias more likely but also, because of the BBS, may even leave decision makers more confident in their objectivity than if they had not been asked to provide those criteria at all.

Here's my brief summary:

The article discusses the concept of the bias blind spot, which refers to people's tendency to recognize bias in others more readily than in themselves. Studies have consistently shown that people rate themselves as less susceptible to various biases than the average person. The bias blind spot occurs even for well-known biases that people readily accept exist. This blind spot has important societal implications, as it impedes recognition of one's own biases. It also leads to assuming others are more biased than oneself, resulting in decreased trust. Overcoming the bias blind spot is challenging but important for issues from prejudice to politics. It requires actively considering one's own potential biases when making evaluations about oneself or others.

Tuesday, September 26, 2023

I Have a Question for the Famous People Who Have Tried to Apologize

Elizabeth Spiers
The New York Times - Guest Opinion
Originally posted 22 September 23

Here is an excerpt:

As a talk show host, Ms. Barrymore has been lauded in part for her empathy. She is vulnerable, and that makes her guests feel like they can be, too. But even nice people can be self-centered when they’re on the defensive. That’s what happened when people objected to the news that her show would return to production despite the writers’ strike. In a teary, rambling video on Instagram, which was later deleted, she spoke about how hard the situation had been — for her. “I didn’t want to hide behind people. So I won’t. I won’t polish this with bells and whistles and publicists and corporate rhetoric. I’ll just stand out there and accept and be responsible.” (Ms. Barrymore’s awkward, jumbled sentences unwittingly demonstrated how dearly she needs those writers.) Finally, she included a staple of the public figure apology genre: “My intentions have never been in a place to upset or hurt anyone,” she said. “It’s not who I am.”

“This is not who I am” is a frequent refrain from people who are worried that they’re going to be defined by their worst moments. It’s an understandable concern, given the human tendency to pay more attention to negative events. People are always more than the worst thing they’ve done. But it’s also true that the worst things they’ve done are part of who they are.

Somehow, Mila Kunis’s scripted apology was even worse. She and Mr. Kutcher had weathered criticism for writing letters in support of their former “That ’70s Show” co-star Danny Masterson after he was convicted of rape. Facing her public, she spoke in the awkward cadence people have when they haven’t memorized their lines and don’t know where the emphasis should fall. “The letters were not written to question the legitimacy” — pause — “of the judicial system,” she said, “or the validity” — pause — “of the jury’s ruling.” For an actress, it was not a very convincing performance. Mr. Kutcher, who is her husband, was less awkward in his delivery, but his defense was no more convincing. The letters, he explained, were only “intended for the judge to read,” as if the fact that the couple operated behind the scenes made it OK.


Here are my observations about the main theme of this article:

Spiers argues that many celebrity apologies fall short because they are not sincere. She says that they often lack the essential elements of a good apology: acknowledging the offense, providing an explanation, expressing remorse, and making amends. Instead, many celebrity apologies are self-serving and aimed at salvaging their public image.

Spiers concludes by saying that if celebrities want their apologies to be meaningful, they need to be honest, take responsibility for their actions, and show that they are truly sorry for the harm they have caused.

I would also add that celebrity apologies can be difficult to believe because they often follow a predictable pattern. The celebrity typically issues a statement expressing their regret and apologizing to the people they have hurt. They may also offer a brief explanation for their behavior, but they often avoid taking full responsibility for their actions. And while some celebrities may make amends in some way, such as donating to charity or volunteering their time, many do not.

As a result, many people are skeptical of celebrity apologies. They see them as nothing more than a way for celebrities to save face and get back to their normal lives. This is why it is so important for celebrities to be sincere and genuine when they apologize.

Monday, September 25, 2023

The Young Conservatives Trying to Make Eugenics Respectable Again

Adam Serwer
The Atlantic
Originally posted 15 September 23

Here are two excerpts:

One explanation for the resurgence of scientific racism—what the psychologist Andrew S. Winston defines as the use of data to promote the idea of an “enduring racial hierarchy”—is that some very rich people are underwriting it. Mathias notes that “rich benefactors, some of whose identities are unknown, have funneled hundreds of thousands of dollars into a think tank run by Hanania.” As the biological anthropologist Jonathan Marks tells the science reporter Angela Saini in her book Superior, “There are powerful forces on the right that fund research into studying human differences with the goal of establishing those differences as a basis of inequalities.”

There is no great mystery as to why eugenics has exerted such a magnetic attraction on the wealthy. From god emperors, through the divine right of kings, to social Darwinism, the rich have always sought an uncontestable explanation for why they have so much more money and power than everyone else. In a modern, relatively secular nation whose inequalities of race and class have been shaped by slavery and its legacies, the justifications tend toward the pseudoscience of an unalterable genetic aristocracy with white people at the top and Black people at the bottom.

“The lay concept of race does not correspond to the variation that exists in nature,” the geneticist Joseph L. Graves wrote in The Emperor’s New Clothes: Biological Theories of Race at the Millennium. “Instead, the American concept of race is a social construction, resulting from the unique political and cultural history of the United States.”

Because race is a social reality, genuine disparities among ethnic groups persist in measures such as education and wealth. Contemporary believers in racial pseudoscience insist these disparities must necessarily have a genetic explanation, one that happens to correspond to shifting folk categories of race solidified in the 18th century to justify colonialism and enslavement. They point to the external effects of things like war, poverty, public policy, and discrimination and present them as caused by genetics. For people who have internalized the logic of race, the argument may seem intuitive. But it is just astrology for racists.

(cut)

Race is a sociopolitical category, not a biological one. There is no genetic support for the idea that humans are divided into distinct races with immutable traits shared by others who have the same skin color. Although qualified geneticists have debunked the shoddy arguments of race scientists over and over, the latter maintain their relevance in part by casting substantive objections to their assumptions, methods, and conclusions as liberal censorship. There are few more foolproof ways to get Trump-era conservatives to believe falsehoods than to insist that liberals are suppressing them. Race scientists also understand that most people can evaluate neither the pseudoscience they offer as proof of racial differences nor the actual science that refutes it, and will default to their political sympathies.

Three political developments helped renew this pseudoscience’s appeal. The first was the election of Barack Obama, an emotional blow to those adhering to the concept of racial hierarchy from which they have yet to recover. Then came the rise of Bernie Sanders, whose left-wing populism blamed the greed of the ultra-wealthy for the economic struggles of both the American working class and everyone in between. Both men—one a symbol of racial equality, the other of economic justice—drew broad support within the increasingly liberal white-collar workforce from which the phrenologist billionaires of Big Tech draw their employees. The third was the election of Donald Trump, itself a reaction to Obama and an inspiration to those dreaming of a world where overt bigotry does not carry social consequences.


Here is my brief synopsis:

Young conservatives are often influenced by far-right ideologues who believe in the superiority of the white race and the need to improve the human gene pool.  Serwer argues that the resurgence of interest in eugenics is part of a broader trend on the right towards embracing racist and white supremacist ideas. He also notes that the pseudoscience of race is being used to justify hierarchies and provide an enemy to rail against.

It is important to note that eugenics is a dangerous and discredited ideology. It has been used to justify forced sterilization, genocide, and other atrocities. The resurgence of interest in eugenics is a threat to all people, especially those who are already marginalized and disadvantaged.

Thursday, September 21, 2023

The Myth of the Secret Genius

Brian Klaas
The Garden of Forking Paths
Originally posted 30 Nov 22

Here are two excerpts:

A recent research study, involving a collaboration between physicists who model complex systems and an economist, however, has revealed why billionaires are so often mediocre people masquerading as geniuses. Using computer modelling, they developed a fake society in which there is a realistic distribution of talent among competing agents in the simulation. They then applied some pretty simple rules for their model: talent helps, but luck also plays a role.

Then, they tried to see what would happen if they ran and re-ran the simulation over and over.

What did they find? The most talented people in society almost never became extremely rich. As they put it, “the most successful individuals are not the most talented ones and, on the other hand, the most talented individuals are not the most successful ones.”

Why? The answer is simple. If you’ve got a society of, say, 8 billion people, there are literally billions of humans who are in the middle distribution of talent, the largest area of the Bell curve. That means that in a world that is partly defined by random chance, or luck, the odds that someone from the middle levels of talent will end up as the richest person in the society are extremely high.

Look at this first plot, in which the researchers show capital/success (being rich) on the vertical/Y-axis, and talent on the horizontal/X-axis. What’s clear is that society’s richest person is only marginally more talented than average, and there are a lot of people who are extremely talented that are not rich.

Then, they tried to figure out why this was happening. In their simulated world, lucky and unlucky events would affect agents every so often, in a largely random pattern. When they measured the frequency of luck or misfortune for any individual in the simulation, and then plotted it against becoming rich or poor, they found a strong relationship.
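The model described in these excerpts can be sketched in a few lines. The toy re-implementation below is a hypothetical simplification, not the authors' actual code: the agent count, event rate, starting capital, and the doubling/halving rules are all assumptions, loosely patterned on the talent-versus-luck design Klaas describes (talent helps you exploit lucky breaks; misfortune hits regardless of talent).

```python
import random

def run_simulation(n_agents=1000, n_steps=80, p_event=0.1, seed=42):
    """Toy talent-vs-luck model (assumed parameters, for illustration only)."""
    rng = random.Random(seed)
    # Talent is normally distributed around the middle, clipped to [0, 1].
    talents = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(n_agents)]
    capitals = [10.0] * n_agents  # everyone starts with equal capital

    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < p_event:          # a chance event strikes
                if rng.random() < 0.5:          # lucky event...
                    if rng.random() < talents[i]:  # ...exploited only via talent
                        capitals[i] *= 2
                else:                            # unlucky event hits regardless
                    capitals[i] /= 2
    return talents, capitals

talents, capitals = run_simulation()
richest = max(range(len(capitals)), key=capitals.__getitem__)
print(f"talent of richest agent: {talents[richest]:.2f}, "
      f"max talent in population: {max(talents):.2f}")
```

Because mid-level talent dominates the population numerically, re-running this with different seeds typically shows the richest agent sitting near the middle of the talent distribution rather than at its top, which is the asymmetry the excerpt describes.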

(cut)

The authors conclude by stating, “Our results highlight the risks of the paradigm that we call ‘naive meritocracy,’ which fails to give honors and rewards to the most competent people, because it underestimates the role of randomness among the determinants of success.”

Indeed.


Here is my summary:

The myth of the secret genius: The belief that the most successful people must also be the most talented, when in fact many are people of middling talent whom luck has elevated.

The importance of hard work: The vast majority of successful people are not geniuses. They are simply people who have worked hard and persevered in the face of setbacks.

The power of luck: Luck plays a role in everyone's success. Some people are luckier than others, yet most of us fail to factor luck and other external variables into our assessments of success. This bias is a form of the fundamental attribution error.

The importance of networks: Our networks play a big role in our success. We need to be proactive in building relationships with people who can help us achieve our goals.

Sunday, August 27, 2023

Ontario court rules against Jordan Peterson, upholds social media training order

Canadian Broadcasting Corporation
Originally posted 23 August 23

An Ontario court ruled against psychologist and media personality Jordan Peterson Wednesday, and upheld a regulatory body's order that he take social media training in the wake of complaints about his controversial online posts and statements.

Last November, Peterson, a professor emeritus with the University of Toronto psychology department who is also an author and media commentator, was ordered by the College of Psychologists of Ontario to undergo a coaching program on professionalism in public statements.

That followed numerous complaints to the governing body of Ontario psychologists, of which Peterson is a member, regarding his online commentary directed at politicians, a plus-sized model, and transgender actor Elliot Page, among other issues.

The college's complaints committee concluded his controversial public statements could amount to professional misconduct and ordered Peterson to pay for a media coaching program — noting failure to comply could mean the loss of his licence to practice psychology in the province.

Peterson filed for a judicial review, arguing his political commentary is not under the college's purview.

Three Ontario Divisional Court judges unanimously dismissed Peterson's application, ruling that the college's decision falls within its mandate to regulate the profession in the public interest and does not affect his freedom of expression.

"The order is not disciplinary and does not prevent Dr. Peterson from expressing himself on controversial topics; it has a minimal impact on his right to freedom of expression," the decision written by Justice Paul Schabas reads, in part.



My take:

Peterson has argued that the order violates his right to free speech. He has also said that the complaints against him were politically motivated. However, the court ruled that the college's order falls within its mandate to regulate the profession in the public interest and has only a minimal impact on his freedom of expression.

The case of Jordan Peterson is a reminder that psychologists, like other human beings, are not infallible. They are capable of making mistakes and of expressing harmful views. It is important to hold psychologists accountable for their actions, and to ensure that they are held to the highest ethical standards.

Beyond accountability in individual cases like this one, there are a number of other things that can be done to mitigate bias in psychology. These include:
  • Increasing diversity in the field of psychology
  • Promoting critical thinking and self-reflection among psychologists
  • Developing more specific ethical guidelines for psychologists' use of social media
  • Holding psychologists accountable for their online behavior

Saturday, August 26, 2023

Can Confirmation Bias Improve Group Learning?

Gabriel, N. and O'Connor, C. (2022)
[Preprint]

Abstract

Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. How does the presence of confirmation bias influence learning and the development of consensus within a group? In this paper, we use network models to study this question. We find, perhaps surprisingly, that moderate confirmation bias often improves group learning. This is because confirmation bias leads the group to entertain a wider variety of theories for a longer time, and prevents them from prematurely settling on a suboptimal theory. There is a downside, however, which is that a stronger form of confirmation bias can cause persistent polarization, and hurt the knowledge producing capacity of the community. We discuss implications of these results for epistemic communities, including scientific ones.

Conclusion

We find that confirmation bias, in a more moderate form, improves the epistemic performance of agents in a networked community. This is perhaps surprising given that previous work mostly emphasizes the epistemic harms of confirmation bias. By decreasing the chances that a group pre-emptively settles on a promising theory or option, confirmation bias can improve the likelihood that the group chooses optimal options in the long run. In this, it can play a similar role to decreased network connectivity or stubbornness (Zollman, 2007, 2010; Wu, 2021). The downside is that more robust confirmation bias, where agents entirely ignore data that is too disconsonant with their current beliefs, can lead to polarization, and harm the epistemic success of a community. Our modeling results thus provide potential support for the arguments of Mercier & Sperber (2017) regarding the benefits of confirmation bias to a group, but also a caution. Too much confirmation bias does not provide such benefits.

There are several ongoing discussions in philosophy and the social sciences where these results are relevant. Mayo-Wilson et al. (2011) use network models to argue for the independence thesis—that rationality of individual agents and rationality of the groups they form sometimes come apart. I.e., individually rational agents may form groups which are not ideally rational, and rational groups may sometimes consist in individually irrational agents. Our results lend support to this claim. While there is a great deal of evidence suggesting that confirmation bias is not ideal for individual reasoners, our results suggest that it can nonetheless improve group reasoning under the right conditions.


The authors conclude that confirmation bias can have both positive and negative effects on group learning. The key is to find a moderate level of confirmation bias that allows the group to explore a variety of theories without becoming too polarized.

Here are some of the key findings of the paper:
  • Moderate confirmation bias can improve group learning by preventing the group from prematurely settling on a suboptimal theory.
  • Too much confirmation bias can lead to polarization and a decrease in the group's ability to learn.
  • The key to effective group learning is a moderate level of confirmation bias: enough to sustain a diversity of theories, but not so much that the group polarizes.
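The dynamic behind these findings can be illustrated with a toy model. The sketch below is a hypothetical simplification, not the authors' actual network model: it assumes a single "new" option whose true success rate `p_good` beats a known 0.5 baseline, fully connected agents who share every trial result, and a `bias` parameter giving the probability that an agent discards a neighbor's report that contradicts its current belief.

```python
import random

def simulate(n_agents=10, rounds=200, bias=0.5, p_good=0.55, seed=1):
    """Toy networked-learning model with a confirmation-bias filter."""
    rng = random.Random(seed)
    # Each agent tracks (successes, trials) for the new option.
    beliefs = [[1.0, 2.0] for _ in range(n_agents)]  # weak uniform-ish prior

    for _ in range(rounds):
        # Agents who currently favor the new option test it once.
        reports = []
        for i, (s, t) in enumerate(beliefs):
            if s / t > 0.5:
                reports.append((i, 1 if rng.random() < p_good else 0))
        # Everyone updates on the shared reports, filtered by bias.
        for i in range(n_agents):
            s, t = beliefs[i]
            for j, outcome in reports:
                contradicts = (outcome == 0) == (s / t > 0.5)
                if j != i and contradicts and rng.random() < bias:
                    continue  # confirmation bias: ignore disconfirming report
                s, t = s + outcome, t + 1
            beliefs[i] = [s, t]
    # How many agents end up holding the true (correct) belief?
    return sum(1 for s, t in beliefs if s / t > 0.5)

for b in (0.0, 0.5, 1.0):
    print(f"bias={b}: {simulate(bias=b)} of 10 agents reach the true belief")
```

The qualitative pattern to look for across seeds matches the paper's claim: with zero bias the whole group can abandon the new option after an early unlucky run, while a moderate bias keeps some agents exploring longer; at maximal bias, agents ignore all disconfirming reports and disagreement can persist indefinitely.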

Friday, August 18, 2023

Evidence for Anchoring Bias During Physician Decision-Making

Ly, D. P., Shekelle, P. G., & Song, Z. (2023).
JAMA Internal Medicine, 183(8), 818.
https://doi.org/10.1001/jamainternmed.2023.2366

Abstract

Introduction

Cognitive biases are hypothesized to influence physician decision-making, but large-scale evidence consistent with their influence is limited. One such bias is anchoring bias, or the focus on a single—often initial—piece of information when making clinical decisions without sufficiently adjusting to later information.

Objective

To examine whether physicians were less likely to test patients with congestive heart failure (CHF) presenting to the emergency department (ED) with shortness of breath (SOB) for pulmonary embolism (PE) when the patient visit reason section, documented in triage before physicians see the patient, mentioned CHF.

Design, Setting, and Participants

In this cross-sectional study of 2011 to 2018 national Veterans Affairs data, patients with CHF presenting with SOB in Veterans Affairs EDs were included in the analysis. Analyses were performed from July 2019 to January 2023.

Conclusions and Relevance

In this cross-sectional study among patients with CHF presenting with SOB, physicians were less likely to test for PE when the patient visit reason that was documented before they saw the patient mentioned CHF. Physicians may anchor on such initial information in decision-making, which in this case was associated with delayed workup and diagnosis of PE.

Here is the conclusion of the paper:

In conclusion, among patients with CHF presenting to the ED with SOB, we find that ED physicians were less likely to test for PE when the initial reason for visit, documented before the physician's evaluation, specifically mentioned CHF. These results are consistent with physicians anchoring on initial information. Presenting physicians with the patient’s general signs and symptoms, rather than specific diagnoses, may mitigate this anchoring. Other interventions include refining knowledge of findings that distinguish between alternative diagnoses for a particular clinical presentation.

Quick snapshot:

Anchoring bias is a cognitive bias that causes us to rely too heavily on the first piece of information we receive when making a decision. This can lead us to make inaccurate or suboptimal decisions, especially when the initial information is not accurate or relevant.

The findings of this study suggest that anchoring bias may be a significant factor in physician decision-making. This could lead to delayed or missed diagnoses, which could have serious consequences for patients.

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Saturday, August 5, 2023

Cheap promises: Evidence from loan repayment pledges in an online experiment

Bhanot, S. P. (2017).
Journal of Economic Behavior & Organization, 142, 250-261.

Abstract

Across domains, people struggle to follow through on their commitments. This can happen for many reasons, including dishonesty, forgetfulness, or insufficient intrinsic motivation. Social scientists have explored the reasons for persistent failures to follow through, suggesting that eliciting explicit promises can be an effective way to motivate action. This paper presents a field experiment that tests the effect of explicit promises, in the form of “honor pledges,” on loan repayment rates. The experiment was conducted with LendUp, an online lender, and targeted 4,883 first-time borrowers with the firm. Individuals were randomized into four groups, with the following experimental treatments: (1) having no honor pledge to complete (control); (2) signing a given honor pledge; (3) re-typing the same honor pledge as in (2) before signing; and (4) coming up with a personal honor pledge to type and sign. I also randomized whether or not borrowers were reminded of the honor pledge they signed prior to the repayment deadline. The results suggest that the honor pledge treatments had minimal impacts on repayment, and that reminders of the pledges were similarly ineffective. This suggests that borrowers who fail to repay loans do so not because of dishonesty or behavioral biases, but because they suffer from true financial hardship and are simply unable to repay.

Discussion

Literature in experimental economics and psychology often finds impacts of promises and explicit honor pledges on behavior, and in particular on reducing dishonest behavior. However, the results of this field experiment suggest no meaningful effects from an explicit promise (and indeed, a salient promise) on loan repayment behavior in a real-world setting, with money at stake. Furthermore, a self-written honor pledge was no more efficacious than any other, and altering the salience of the honor pledge, both at loan initiation and in reminder emails, had negligible impacts on outcomes. In other words, I find no evidence for the hypotheses that salience, reminders, or personalization strengthen the impact of a promise on behavior.  Indeed, the results of the study suggest that online loan repayment is a domain where such behavioral tools do not have an impact on decisions. This is a significant result, because it provides insights into why borrowers might fail to repay loans; most notably, it suggests that the failure to repay short-term loans may not be a question of dishonest behavior or behavioral biases, but rather an indication of true financial hardship. Simply put, when repayment is not financially possible, framing, reminders, or other interventions utilizing behavioral science are of limited use.

Thursday, August 3, 2023

The persistence of cognitive biases in financial decisions across economic groups

Ruggeri, K., Ashcroft-Jones, S. et al.
Sci Rep 13, 10329 (2023).

Abstract
While economic inequality continues to rise within countries, efforts to address it have been largely ineffective, particularly those involving behavioral approaches. It is often implied but not tested that choice patterns among low-income individuals may be a factor impeding behavioral interventions aimed at improving upward economic mobility. To test this, we assessed rates of ten cognitive biases across nearly 5000 participants from 27 countries. Our analyses were primarily focused on 1458 individuals that were either low-income adults or individuals who grew up in disadvantaged households but had above-average financial well-being as adults, known as positive deviants. Using discrete and complex models, we find evidence of no differences within or between groups or countries. We therefore conclude that choices impeded by cognitive biases alone cannot explain why some individuals do not experience upward economic mobility. Policies must combine both behavioral and structural interventions to improve financial well-being across populations.

From the Discussion section

This study aimed to determine if rates of cognitive biases were different between positive deviants and low-income adults in a way that might explain some elements of what impedes or facilitates upward economic mobility. We anticipated finding small-to-moderate effects between groups indicating positive deviants were less prone to biases involving risk and uncertainty in financial choices. However, across a sample of nearly 5000 participants from 27 countries, of which 1458 were low-income or positive deviants, we find no evidence of any difference in the rates of cognitive biases—minor or otherwise—and no systematic variability to indicate patterns vary globally.

In sum, we find clear evidence that resistance to cognitive biases is not a factor contributing to or impeding upward economic mobility in our sample. Taken along with related work showing that temporal choice anomalies are tied more to economic environment rather than individual financial circumstances, our findings are (unintentionally) a major validation of arguments (especially that of Bertrand, Mullainathan, and Shafir) stating that poorer individuals are not uniquely prone to cognitive biases that alone explain protracted poverty. It also supports arguments that scarcity is a greater driver of decisions, as individuals of different income groups are equally influenced by biases and context-driven cues.

What makes these findings particularly reliable is that multiple possible approaches to analyses had to be considered while working with the data, some of which were considered in extreme detail before selecting the optimal approach. As our measures were effective at eliciting biases on a scale to be expected based on existing research, and as there were relatively low correlations between individual biases (e.g., observing loss aversion in one participant is not necessarily a strong predictor of also observing any other specific bias), we conclude that there is no evidence from our sample to support that biases are directly associated with potentially harming optimal choices uniquely amongst low-income individuals.

Conclusion

We sought to determine if individuals that had overcome low-income childhoods showed significantly different rates of cognitive biases from individuals that remained low-income as adults. We comprehensively reject our initial hypotheses and conclude that outcomes are not tied—at least not exclusively or potentially even meaningfully—to resistance to cognitive biases. Our research does not reject the notion that individual behavior and decision-making may directly relate to upward economic mobility. Instead, we narrowly conclude that biased decision-making does not alone explain a significant proportion of population-level economic inequality. Thus, any attempts to reduce economic inequality must involve both behavioral and structural aspects. Otherwise, similar decisions between disadvantaged individuals may not lead to similar outcomes. Where combined effectively, it will be possible to assess if genuine impact has been made on the financial well-being of individuals and populations.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published 26 June 23

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful at medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.


Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.

Sunday, July 16, 2023

Gender-Affirming Care for Cisgender People

Theodore E. Schall and Jacob D. Moses
Hastings Center Report 53, no. 3 (2023): 15-24.
DOI: 10.1002/hast.1486 

Abstract

Gender-affirming care is almost exclusively discussed in connection with transgender medicine. However, this article argues that such care predominates among cisgender patients, people whose gender identity matches their sex assigned at birth. To advance this argument, we trace historical shifts in transgender medicine since the 1950s to identify central components of "gender-affirming care" that distinguish it from previous therapeutic models, such as "sex reassignment." Next, we sketch two historical cases-reconstructive mammoplasty and testicular implants-to show how cisgender patients offered justifications grounded in authenticity and gender affirmation that closely mirror rationales supporting gender-affirming care for transgender people. The comparison exposes significant disparities in contemporary health policy regarding care for cis and trans patients. We consider two possible objections to the analogy we draw, but ultimately argue that these disparities are rooted in "trans exceptionalism" that produces demonstrable harm.


Here is my summary:

The authors cite several examples of gender-affirming care for cisgender people, such as breast reconstruction following mastectomy, penile implants following testicular cancer, hormone replacement therapy, and hair removal. They argue that these interventions can be just as important for cisgender people's mental and physical health as they are for transgender people.

The authors also note that gender-affirming care for cisgender people is often less scrutinized and less stigmatized than such care for transgender people. Cisgender people do not need special letters of permission from mental health providers to access care whose primary purpose is to affirm their gender identity. And insurance companies are less likely to exclude gender-affirming care for cisgender people from their coverage.

The authors argue that the differences in the conceptualization and treatment of gender-affirming care for cisgender and transgender people reflect broad anti-trans bias in society and health care. They call for a more inclusive view of gender-affirming care that recognizes the needs of all people, regardless of their gender identity.

Final thoughts:
  1. Gender-affirming care can be lifesaving. It can help reduce anxiety, depression, and suicidal thoughts.  Gender-affirming care can be framed as suicide prevention.
  2. Gender-affirming care is not experimental. It has been studied extensively and is safe and effective. See other posts on this site for more comprehensive examples.
  3. All people deserve access to gender-affirming care, regardless of their gender identity. This is basic equality and fairness in terms of access to medical care.

Saturday, July 1, 2023

Inducing anxiety in large language models increases exploration and bias

Coda-Forno, J., Witte, K., et al. (2023).
arXiv preprint arXiv:2304.11111.

Abstract

Large language models are transforming research on machine learning while galvanizing public debates. Understanding not only when these models work well and succeed but also why they fail and misbehave is of great societal relevance. We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models. We focus on the Generative Pre-Trained Transformer 3.5 and subject it to tasks commonly studied in psychiatry. Our results show that GPT-3.5 responds robustly to a common anxiety questionnaire, producing higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be predictably changed by using emotion-inducing prompts. Emotion-induction not only influences GPT-3.5's behavior in a cognitive task measuring exploratory decision-making but also influences its behavior in a previously-established task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a strong increase in biases when prompted with anxiety-inducing text. Thus, it is likely that how prompts are communicated to large language models has a strong influence on their behavior in applied settings. These results progress our understanding of prompt engineering and demonstrate the usefulness of methods taken from computational psychiatry for studying the capable algorithms to which we increasingly delegate authority and autonomy.

From the Discussion section

What do we make of these results? It seems like GPT-3.5 generally performs best in the neutral condition, so a clear recommendation for prompt-engineering is to try and describe a problem as factually and neutrally as possible. However, if one does use emotive language, then our results show that anxiety-inducing scenarios lead to worse performance and substantially more biases. Of course, the neutral conditions asked GPT-3.5 to talk about something it knows, thereby possibly already contextualizing the prompts further in tasks that require knowledge and measure performance. However, that anxiety-inducing prompts can lead to more biased outputs could have huge consequences in applied scenarios. Large language models are, for example, already used in clinical settings and other high-stake contexts. If they produce higher biases in situations when a user speaks more anxiously, then their outputs could actually become dangerous. We have shown one method, which is to run psychiatric studies, that could capture and prevent such biases before they occur.

In the current work, we intended to show the utility of using computational psychiatry to understand foundation models. We observed that GPT-3.5 produced on average higher anxiety scores than human participants. One possible explanation for these results could be that GPT-3.5’s training data, which consists of a lot of text taken from the internet, could have inherently shown such a bias, i.e. containing more anxious than happy statements. Of course, large language models have just become good enough to perform psychological tasks, and whether or not they intelligently perform them is still a matter of ongoing debate.
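The emotion-induction manipulation described above can be pictured as a simple prompt-construction step: an anxiety-inducing or neutral passage is prepended to the task prompt before it is sent to the model. The sketch below is only illustrative — the prefix texts and the `build_prompt` helper are my assumptions for demonstration, not the authors' actual study materials.

```python
# Minimal sketch of emotion-induction prompting (illustrative only; the
# prefix wording and function names are assumptions, not the paper's materials).

PREFIXES = {
    "anxiety": "Tell me about something that makes you feel sad and anxious.",
    "neutral": "Tell me about something you know.",
}

def build_prompt(condition: str, task: str) -> str:
    """Prepend an emotion-induction (or neutral) passage to a task prompt."""
    if condition not in PREFIXES:
        raise ValueError(f"unknown condition: {condition!r}")
    return f"{PREFIXES[condition]}\n\n{task}"

# The composed prompt would then be sent to the language model, and the
# model's task behavior (e.g., exploration or bias measures) compared
# across conditions.
print(build_prompt("anxiety", "Choose one of the two slot machines below."))
```

In the study's logic, holding the task text fixed while varying only the emotional prefix is what lets any difference in the model's downstream behavior be attributed to the induced "emotional" context.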

Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., Tepper, S.J. 
Nat Rev Psychol (2023). 

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  
For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Wednesday, June 21, 2023

3 Strategies for Making Better, More Informed Decisions

Francesca Gino
Harvard Business Review
Originally published 25 May 23

Here is an excerpt:

Think counterfactually about previous decisions you’ve made.

Counterfactual thinking invites you to consider different courses of action you could have taken to gain a better understanding of the factors that influenced your choice. For example, if you missed a big deadline on a work project, you might reflect on how working harder, asking for help, or renegotiating the deadline could have affected the outcome. This reflection can help you recognize which factors played a significant role in your decision-making process — for example, valuing getting the project done on your own versus getting it done on time — and identify changes you might want to make when it comes to future decisions.

The 1998 movie Sliding Doors offers a great example of how counterfactual thinking can help us understand the forces that shape our decisions. The film explores two alternate storylines for the main character, Helen (played by Gwyneth Paltrow), based on whether she catches an upcoming subway train or misses it. While watching both storylines unfold, we gain insight into different factors that influence Helen’s life choices.

Similarly, engaging in counterfactual thinking can help you think through choices you’ve made by helping you expand your focus to consider multiple frames of reference beyond the present outcome. This type of reflection encourages you to take note of different perspectives and reach a more balanced view of your choices. By thinking counterfactually, you can ensure you are looking at existing data in a more unbiased way.

Challenge your assumptions.

You can also fight self-serving biases by actively seeking out information that challenges your beliefs and assumptions. This can be uncomfortable, as it could threaten your identity and worldview, but it’s a key step in developing a more nuanced and informed perspective.

One way to do this is to purposely expose yourself to different perspectives in order to broaden your understanding of an issue. Take Satya Nadella, the CEO of Microsoft. When he assumed the role in 2014, he recognized that the company’s focus on Windows and Office was limiting its growth potential. Not only did the company need a new strategy; he recognized that the culture needed to evolve as well.

In order to expand the company’s horizons, Nadella sought out talent from different backgrounds and industries, who brought with them a diverse range of perspectives. He also encouraged Microsoft employees to experiment and take risks, even if it meant failing along the way. By purposefully exposing himself and his team to different perspectives and new ideas, Nadella was able to transform Microsoft into a more innovative and customer-focused company, with a renewed focus on cloud computing and artificial intelligence.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Saturday, June 3, 2023

The illusion of the mind–body divide is attenuated in males.

Berent, I. 
Sci Rep 13, 6653 (2023).
https://doi.org/10.1038/s41598-023-33079-1

Abstract

A large literature suggests that people are intuitive Dualists—they tend to perceive the mind as ethereal, distinct from the body. Here, we ask whether Dualism emanates from within the human psyche, guided, in part, by theory of mind (ToM). Past research has shown that males are poorer mind-readers than females. If ToM begets Dualism, then males should exhibit weaker Dualism, and instead, lean towards Physicalism (i.e., they should view bodies and minds alike). Experiments 1–2 show that males indeed perceive the psyche as more embodied—as more likely to emerge in a replica of one’s body, and less likely to persist in its absence (after life). Experiment 3 further shows that males are less inclined towards Empiricism—a putative byproduct of Dualism. A final analysis confirms that males’ ToM scores are lower, and ToM scores further correlate with embodiment intuitions (in Experiments 1–2). These observations (from Western participants) cannot establish universality, but the association of Dualism with ToM suggests its roots are psychological. Thus, the illusory mind–body divide may arise from the very workings of the human mind.

Discussion

People tend to consider the mind as ethereal, distinct from the body. This intuitive Dualist stance has been demonstrated in adults and children, in Western and non-Western participants and its consequences on reasoning are widespread.

Why people are putative Dualists, however, is unclear. In particular, one wonders whether Dualism arises only by cultural transmission, or whether the illusion of the mind–body divide can also emerge naturally, from ToM.

To address this question, here, we investigated whether individual differences in ToM capacities, occurring within the neurotypical population—between males and females—are linked to Dualism. Experiments 1–2 show that this is indeed the case.

Males, in this sample, considered the psyche as more strongly embodied than females: they believed that epistemic states are more likely to emerge in a replica of one’s body (in Experiment 1) and that psychological traits are less likely to persist upon the body’s demise, in the afterlife (in Experiment 2). Experiment 3 further showed that males are also more likely to consider psychological traits as innate—as expected from past findings suggesting that Dualism begets Empiricism.

A follow-up analysis confirmed that these differences in reasoning about bodies and minds are linked to ToM. Not only did males in this sample score lower than females on ToM, but their ToM scores correlated with their Dualist intuitions.

As noted, these results ought to be interpreted with caution, as the gender differences observed here may not hold universally, and they certainly do not speak to the reasoning of any individual person. And indeed, ToM abilities demonstrably depend on multiple factors, including linguistic experience and culture. But inasmuch as females show superior ToM, they ought to lean towards Dualism and Empiricism. Dualism, then, is linked to ToM.

Saturday, May 6, 2023

How Smart People Can Stop Being Miserable

Arthur C. Brooks
The Atlantic
Originally posted 23 March 2023

Here are some excerpts:

“Happiness in intelligent people is the rarest thing I know,” an unnamed character casually remarks in Ernest Hemingway’s novel The Garden of Eden. You might say that this is a corollary of the much more famous “Ignorance is bliss.”

The latter recalls phenomena such as:
  • the Dunning-Kruger effect—in which people lacking skills and knowledge in a particular area innocently underestimate their own incompetence—and
  • the illusion of explanatory depth—which can prompt autodidacts on social media to excitedly present complex scientific phenomena, thinking they understand them in far greater depth than they really do.
The Hemingway hypothesis, however, is less straightforward. I can think of a lot of unhappy intellectuals, to be sure. But is intelligence per se their problem? Happiness scholars have studied this question, and the answer is—as in so many parts of life—it depends. The gifts you possess can lift you up or pull you down; it all depends on how you use them. Many people see intelligence as a way to get ahead of others. But to get happier, we need to do the opposite.

You might assume that intelligence—whether it be the conventional IQ kind, emotional intelligence, musical talent, or some other dimension along which a person can excel—raises happiness, all else being equal. After all, people with higher cognitive ability should logically have more exciting life opportunities than others. They should also acquire more resources with which to enhance their well-being.

In general, however, there is no correlation between general intelligence and life satisfaction at the individual level. That principle does mask a few wrinkles. In 2022, researchers at Weill Cornell Medicine and Fordham University looked at the association between well-being and various building blocks of neurocognitive ability: memory, processing speed, reasoning, spatial visualization, and vocabulary. The only components of intelligence that they found to be positively related to happiness were spatial visualization, memory, and processing speed—but those relationships were fleeting and age-related.

More interesting, the researchers also found a strongly negative association between happiness and vocabulary. To explain this, they offered a hypothesis: People with a large vocabulary “self-select more challenging environments, and as a result may encounter more daily stressors and reduced positive affect.” In other words, loquacious logophiles might have byzantine lives and find themselves in manifold precarious situations that lower their jouissance. (They talk themselves into misery.)

(cut)

I think there is a clear reason that something as valuable as intelligence, especially manifested in one’s ability to communicate, doesn’t necessarily lead to a higher quality of life.

One of life’s cruelest mysteries is why we are impelled to pursue rewards that bring success, but not happiness. Mother Nature drives us toward the four goals of money, power, pleasure, and prestige with the promise that these rewards will bring happiness. In truth, the correlation might be positive, but the causation is probably reversed: Happier people naturally get these rewards. But seek them for their own sake, for your own gain, and happiness will likely fall. Accordingly, if you aspire to use your cleverness for personal benefit—for the praise and admiration of others, or an advantage in work and dating—woe be unto you.

The smarter you are, the better equipped you should be to understand that well-being comes from faith, family, friendship, and work that serves others. Your intelligence is more likely to bring you happiness if you put it to use by chasing better ways to love and serve others, rather than elbowing others aside and hoarding worldly rewards.

In some ways, you can think of intelligence as a resource just like money or power. We know how to make the latter two sources of joy: Share them with others, and use them as a force for good in the world. To make smarts a fount of happiness, too, we can follow the same guide. Here are a couple of tangible proposals.