Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, October 31, 2023

Which Humans?

Atari, M., Xue, M. J., et al.
(2023, September 22).


Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained. We show that LLMs’ responses to psychological measures are an outlier compared with large-scale cross-cultural data, and that their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies but declines rapidly as we move away from these populations (r = -.70). Ignoring cross-cultural diversity in both human and machine psychology raises numerous scientific and ethical issues. We close by discussing ways to mitigate the WEIRD bias in future generations of generative language models.
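The r = -.70 reported above is a Pearson correlation between a population's cultural distance from the United States and how closely the LLM's responses track that population's responses. As a rough sketch of how such a coefficient is computed (the numbers below are invented placeholders, not the authors' data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: cultural distance from the U.S. (x) versus
# similarity of LLM responses to each population's responses (y).
distance = [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]
similarity = [0.95, 0.90, 0.80, 0.65, 0.55, 0.40]
print(pearson_r(distance, similarity))
```

On placeholder data like this, where similarity falls off almost linearly with distance, the function returns a value near -1; the paper's -.70 indicates a strong but noisier decline.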

My summary:

The authors argue that much of the existing literature on LLMs largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe. This diversity is not fully captured by the textual data on which current LLMs have been trained.

For example, LLMs are often evaluated on their ability to complete tasks such as answering trivia questions, generating creative text formats, and translating languages. However, these tasks are all biased towards the cultural context of the data on which the LLMs were trained. This means that LLMs may perform well on these tasks for people from certain cultures, but poorly for people from other cultures.

Atari and his co-authors argue that it is important to be aware of this bias when interpreting the results of LLM evaluations. They also call for more research on the performance of LLMs across different cultures and demographics.

One specific example they give is the use of LLMs to generate creative text formats, such as poems and code. They argue that LLMs that are trained on a dataset of text from English-speaking countries are likely to generate creative text that is more culturally relevant to those countries. This could lead to bias and discrimination against people from other cultures.

Atari and his co-authors conclude by calling for more research on the following questions:
  • How do LLMs perform on different tasks across different cultures and demographics?
  • How can we develop LLMs that are less biased towards the cultural context of their training data?
  • How can we ensure that LLMs are used in a way that is fair and equitable for all people?

Monday, October 30, 2023

The Mental Health Crisis Among Doctors Is a Problem for Patients

Keren Landman
Originally posted 25 OCT 23

Here is an excerpt:

What’s causing such high levels of mental distress among doctors?

Physicians have high rates of mental distress — and they’re only getting higher. One 2023 survey found six out of 10 doctors often had feelings of burnout, compared to four out of 10 pre-pandemic. In a separate 2023 study, nearly a quarter of doctors said they were depressed.

Physicians die by suicide at rates higher than the general population, with women’s risk twice as high as men’s. In a 2022 survey, one in 10 doctors said they’d thought about or attempted suicide.

Not all doctors are at equal risk: Primary care providers — like emergency medicine, internal medicine, and pediatrics practitioners — are most likely to say they’re burned out, and female physicians experience burnout at higher rates than male physicians.

(It’s worth noting that other health care professionals — perhaps most prominently nurses — also face high levels of mental distress. But because nurses are more frequently unionized than doctors and because their professional culture isn’t the same as doctor culture, the causes and solutions are also somewhat different.)

Here is my summary:

The article discusses the mental health crisis among doctors and its implications for patients. It notes that physicians die by suicide at rates higher than the general population, and that they also experience high rates of burnout and depression.

The mental health crisis among doctors is a problem for patients because it can lead to impaired judgment, medical errors, and reduced quality of care. Additionally, the stigma associated with mental illness can prevent doctors from seeking the help they need, which can further exacerbate the problem.

The article concludes by calling for more attention to the mental health of doctors and for more resources to be made available to help them.

I treat a number of physicians in my practice.

Sunday, October 29, 2023

We Can't Compete With AI Girlfriends

Freya India
Originally published 14 September 23

Here is an excerpt:

Of course most people are talking about what this means for men, given they make up the vast majority of users. Many worry about a worsening loneliness crisis, a further decline in sex rates, and ultimately the emergence of “a new generation of incels” who depend on and even verbally abuse their virtual girlfriends. Which is all very concerning. But I wonder, if AI girlfriends really do become as pervasive as online porn, what this will mean for girls and young women? Who feel they need to compete with this?

Most obvious to me is the ramping up of already unrealistic beauty standards. I know conservatives often get frustrated with feminists calling everything unattainable, and I agree they can go too far — but still, it’s hard to deny that the pressure to look perfect today is unlike anything we’ve ever seen before. And I don’t think that’s necessarily pressure from men but I do very much think it’s pressure from a network of profit-driven industries that take what men like and mangle it into an impossible ideal. Until the pressure isn’t just to be pretty but filtered, edited and surgically enhanced to perfection. Until the most lusted after women in our culture look like virtual avatars. And until even the most beautiful among us start to be seen as average.

Now add to all that a world of fully customisable AI girlfriends, each with flawless avatar faces and cartoonish body proportions. Eva AI’s Dream Girl Builder, for example, allows users to personalise every feature of their virtual girlfriend, from face style to butt size. Which could clearly be unhealthy for men who already have warped expectations. But it’s also unhealthy for a generation of girls already hating how they look, suffering with facial and body dysmorphia, and seeking cosmetic surgery in record numbers. Already many girls feel as if they are in constant competition with hyper-sexualised Instagram influencers and infinitely accessible porn stars. Now the next generation will grow up not just with all that but knowing the boys they like can build and sext their ideal woman, and feeling as if they must constantly modify themselves to compete. I find that tragic.

Here is my summary:

The article discusses the growing trend of AI girlfriends and the potential dangers associated with their proliferation. It mentions that various startups are creating romantic chatbots capable of explicit conversations and sexual content, with millions of users downloading such apps. While much of the concern focuses on the impact on men, the article also highlights the negative consequences this trend may have on women, particularly in terms of unrealistic beauty standards and emotional expectations. The author expresses concerns about young girls feeling pressured to compete with AI girlfriends and the potential harm to self-esteem and body image. The article raises questions about the impact of AI girlfriends on real relationships and emotional intimacy, particularly among younger generations. It concludes with a glimmer of hope that people may eventually reject the artificial in favor of authentic human interactions.

The article raises valid concerns about the proliferation of AI girlfriends and their potential societal impacts. It is indeed troubling to think about the unrealistic beauty and emotional standards that these apps may reinforce, especially among young girls and women. The pressure to conform to these virtual ideals can undoubtedly have damaging effects on self-esteem and mental well-being.

The article also highlights concerns about the potential substitution of real emotional intimacy with AI companions, particularly among a generation that is already grappling with social anxieties and less real-world human interaction. This raises important questions about the long-term consequences of such technologies on relationships and societal dynamics.

However, the article's glimmer of optimism suggests that people may eventually realize the value of authentic, imperfect human interactions. This point is essential, as it underscores the potential for a societal shift away from excessive reliance on AI and towards more genuine connections.

In conclusion, while AI girlfriends may offer convenience and instant gratification, they also pose significant risks to societal norms and emotional well-being. It is crucial for individuals and society as a whole to remain mindful of these potential consequences and prioritize real human connections and authenticity.

Saturday, October 28, 2023

Meaning from movement and stillness: Signatures of coordination dynamics reveal infant agency

Sloan, A. T., Jones, N. A., et al. (2023).
PNAS, 120(39), e2306732120.


How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics, we demonstrate that patterns of movement and coordination in 3 to 4-mo-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.


Revamping one of the earliest paradigms for the investigation of infant learning, and moving beyond reinforcement accounts, we show that the emergence of agency in infants can take the form of a bifurcation or phase transition in a dynamical system that spans the baby, the brain, and the environment. Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist—and dynamics provides a means to identify them. This phenotyping method may be useful for identifying babies at risk.

Here is my take:

Importantly, researchers found that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

The findings of this study suggest that infants are not simply passive observers of the world around them, but rather active participants in their own learning and development. The researchers believe that their work could have implications for the early identification of infants at risk for developmental delays.

Here are some of the key takeaways from the study:
  • Infants learn to make sense of their relation to the world through their movement and interaction with their environment.
  • The emergence of agency is a punctuated, self-organizing process that occurs in both movement and stillness.
  • Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist.
  • Dynamics provides a means to identify behavioral phenotypes of agentive discovery, which may be useful for identifying babies at risk.

This study is a significant contribution to our understanding of how infants learn and develop. It provides new insights into the role of movement and stillness in the emergence of agency and consciousness. The findings of this study have the potential to improve our ability to identify and support infants at risk for developmental delays.

Friday, October 27, 2023

Theory of consciousness branded 'pseudoscience' by neuroscientists

Clare Wilson
New Scientist
Originally posted 19 Sept 23

Consciousness is one of science’s deepest mysteries; it is considered so difficult to explain how physical entities like brain cells produce subjective sensory experiences, such as the sensation of seeing the colour red, that this is sometimes called “the hard problem” of science.

While the question has long been investigated by studying the brain, IIT came from considering the mathematical structure of information-processing networks and could also apply to animals or artificial intelligence.

It says that a network or system has a higher level of consciousness if it is more densely interconnected, such that the interactions between its connection points or nodes yield more information than if it is reduced to its component parts.

IIT predicts that it is theoretically possible to calculate a value for the level of consciousness, termed phi, of any network with known structure and functioning. But as the number of nodes within a network grows, the sums involved get exponentially bigger, meaning that it is practically impossible to calculate phi for the human brain – or indeed any information-processing network with more than about 10 nodes.
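The blow-up is easy to see even in a simplified form. Full IIT evaluates information across partitions of the network, and merely counting the two-way cuts grows exponentially with the number of nodes (a toy count of bipartitions, not the actual phi algorithm):

```python
def bipartition_count(n_nodes: int) -> int:
    """Number of ways to cut a set of n nodes into two non-empty parts.

    Each node goes on one of two sides (2**n assignments); halve for
    symmetry and drop the empty cut, giving 2**(n - 1) - 1.
    """
    return 2 ** (n_nodes - 1) - 1

for n in (5, 10, 20, 40):
    print(n, bipartition_count(n))
```

At 10 nodes there are already 511 cuts to evaluate, and at 40 nodes more than 5 × 10¹¹, which is why brute-force computation of phi is only feasible for very small networks.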


Giulio Tononi at the University of Wisconsin-Madison, who first developed IIT and took part in the recent testing, did not respond to New Scientist’s requests for comment. But Johannes Fahrenfort at VU Amsterdam in the Netherlands, who was not involved in the recent study, says the letter went too far. “There isn’t a lot of empirical support for IIT. But that doesn’t warrant calling it pseudoscience.”

Complicating matters, there is no single definition of pseudoscience. But IIT is not in the same league as astrology or homeopathy, says James Ladyman at the University of Bristol in the UK. “It looks like a serious attempt to understand consciousness. It doesn’t make a theory pseudoscience just because some people are making exaggerated claims.”


A group of 124 neuroscientists, including prominent figures in the field, have criticized the integrated information theory (IIT) of consciousness in an open letter. They argue that recent experimental evidence said to support IIT didn't actually test its core ideas and is practically impossible to perform. IIT suggests that the level of consciousness, called "phi," can be calculated for any network with known structure and functioning, but this becomes impractical for networks with many nodes, like the human brain. Some critics believe that IIT has been overhyped and may have unintended consequences for policies related to consciousness in fetuses and animals. However, not all experts consider IIT pseudoscience, with some seeing it as a serious attempt to understand consciousness.

The debate surrounding the integrated information theory (IIT) of consciousness is a complex one. While it's clear that the recent experimental evidence has faced criticism for not directly testing the core ideas of IIT, it's important to recognize that the study of consciousness is a challenging and ongoing endeavor.

Consciousness is indeed one of science's profound mysteries, often referred to as "the hard problem." IIT, in its attempt to address this problem, has sparked valuable discussions and research. It may not be pseudoscience, but the concerns raised about overhyping its findings are valid. It's crucial for scientific theories to be communicated accurately to avoid misinterpretation and potential policy implications.

Ultimately, the study of consciousness requires a multidisciplinary approach and the consideration of various theories, and it's important to maintain a healthy skepticism while promoting rigorous scientific inquiry in this complex field.

Thursday, October 26, 2023

The Neuroscience of Trust

Paul J. Zak
Harvard Business Review
Originally posted January-February 2017

Here is an excerpt:

The Return on Trust

After identifying and measuring the managerial behaviors that sustain trust in organizations, my team and I tested the impact of trust on business performance. We did this in several ways. First, we gathered evidence from a dozen companies that have launched policy changes to raise trust (most were motivated by a slump in their profits or market share). Second, we conducted the field experiments mentioned earlier: In two businesses where trust varies by department, my team gave groups of employees specific tasks, gauged their productivity and innovation in those tasks, and gathered very detailed data—including direct measures of brain activity—showing that trust improves performance. And third, with the help of an independent survey firm, we collected data in February 2016 from a nationally representative sample of 1,095 working adults in the U.S. The findings from all three sources were similar, but I will focus on what we learned from the national data since it’s generalizable.

By surveying the employees about the extent to which firms practiced the eight behaviors, we were able to calculate the level of trust for each organization. (To avoid priming respondents, we never used the word “trust” in surveys.) The U.S. average for organizational trust was 70% (out of a possible 100%). Fully 47% of respondents worked in organizations where trust was below the average, with one firm scoring an abysmally low 15%. Overall, companies scored lowest on recognizing excellence and sharing information (67% and 68%, respectively). So the data suggests that the average U.S. company could enhance trust by improving in these two areas—even if it didn’t improve in the other six.

The effect of trust on self-reported work performance was powerful. Respondents whose companies were in the top quartile indicated they had 106% more energy and were 76% more engaged at work than respondents whose firms were in the bottom quartile. They also reported being 50% more productive—which is consistent with our objective measures of productivity from studies we have done with employees at work. Trust had a major impact on employee loyalty as well: Compared with employees at low-trust companies, 50% more of those working at high-trust organizations planned to stay with their employer over the next year, and 88% more said they would recommend their company to family and friends as a place to work.

Here is a summary of the key points from the article:
  • Trust is crucial for social interactions and has implications for economic, political, and healthcare outcomes. There are two main types of trust - emotional trust and cognitive trust.
  • Emotional trust develops early in life through attachments and is more implicit, while cognitive trust relies on reasoning and develops later. Both rely on brain regions involved in reward, emotion regulation, understanding others' mental states, and decision making.
  • Oxytocin and vasopressin play key roles in emotional trust by facilitating social bonding and attachment. Disruptions to these systems are linked to social disorders like autism.
  • The prefrontal cortex, amygdala, and striatum are involved in cognitive trust judgments and updating trustworthiness based on new evidence. Damage to prefrontal regions impairs updating of trustworthiness.
  • Trust engages the brain's reward circuitry. Betrayals of trust activate pain and emotion regulation circuits. Trustworthiness cues engage the mentalizing network for inferring others' intentions.
  • Neuroimaging studies show trust engages brain regions involved in reward, emotion regulation, understanding mental states, and decision making. Oxytocin administration increases trusting behavior.
  • Understanding the neuroscience of trust can inform efforts to build trust in healthcare, economic, political, and other social domains. More research is needed on how trust develops over the lifespan.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 


Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.


We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper can only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Tuesday, October 24, 2023

The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Telkamp, J.B., Anderson, M.H. 
J Bus Ethics 178, 961–976 (2022).


Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.

The article is paywalled, link is above.

Here are some additional points:
  • The article raises important questions about the ethicality of AI systems. It is clear that there is no single, monolithic standard of morality that can be applied to AI systems. Instead, we need to consider a plurality of moral foundations when evaluating the ethicality of AI systems.
  • The article also highlights the challenges of assessing the ethicality of AI systems. It is difficult to measure the impact of AI systems on human well-being, and there is no single, objective way to determine whether an AI system is ethical. However, the article suggests that a pluralistic approach to ethical evaluation, which takes into account a variety of moral perspectives, is the best way to assess the ethicality of AI systems.
  • The article concludes by calling for more research on the implications of diverse human moral foundations for the ethicality of AI. This is an important area of research, and I hope that more research is conducted in this area in the future.

Monday, October 23, 2023

The Knobe Effect From the Perspective of Normative Orders

Waleszczyński, A., Obidziński, M., & Rejewska, J.
(2018). Studia Humana, 7(4), 9-15.


The characteristic asymmetry in the attribution of intentionality in causing side effects, known as the Knobe effect, is considered to be a stable model of human cognition. This article looks at whether the way of thinking and analysing one scenario may affect the other and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies performed are based on a distinction between moral and non-moral normativity possibly affecting the judgments passed in both scenarios. Therefore, an essential role in judgments about the intentionality of causing a side effect could be played by normative competences responsible for distinguishing between normative orders. 

From the Summary

As to the question asked at the onset of this article, namely, whether the way of thinking about the intentionality of causing a side effect in morally negative situations affects the way of thinking about the intentionality of causing a side effect in morally positive situations, or vice versa, the answer could be as follows. It is very likely that the way of thinking and analysing each of the scenarios depends on the normative order from the perspective of which each particular scenario or sequence of scenarios is considered. At the same time, the results suggest that it is moral normativity that decides the stability of the Knobe effect. Nevertheless, more in-depth empirical and theoretical studies are required in order to analyse the problems discussed in this article more thoroughly. 

My brief explanation:

The Knobe Effect is a phenomenon in experimental philosophy where people are more likely to ascribe intentionality to an action if it has a harmful side effect that the agent could have foreseen, even if the agent did not intend the side effect. This effect is important to psychologists because it sheds light on how people understand intentionality, which is a central concept in psychology.

The Knobe Effect is also important to psychologists because it has implications for our understanding of moral judgment. For example, if people are more likely to blame an agent for a harmful side effect that the agent could have foreseen, even if the agent did not intend the side effect, then this suggests that people may be using moral considerations to inform their judgments of intentionality.

Understanding these asymmetrical attributions may be most helpful when working with high-conflict couples, who often interpret harmful messages as intentional and minimize helpful and supportive messages (because those are expected).

Sunday, October 22, 2023

What Is Psychological Safety?

Amy Gallo
Harvard Business Review
Originally posted 15 FEB 23

Here are two excerpts:

Why is psychological safety important?

First, psychological safety leads to team members feeling more engaged and motivated, because they feel that their contributions matter and that they’re able to speak up without fear of retribution. Second, it can lead to better decision-making, as people feel more comfortable voicing their opinions and concerns, which often leads to a more diverse range of perspectives being heard and considered. Third, it can foster a culture of continuous learning and improvement, as team members feel comfortable sharing their mistakes and learning from them. (This is what my boss was doing in the opening story.)

All of these benefits — the impact on a team’s performance, innovation, creativity, resilience, and learning — have been proven in research over the years, most notably in Edmondson’s original research and in a study done at Google. That research, known as Project Aristotle, aimed to understand the factors that impacted team effectiveness across Google. Using over 30 statistical models and hundreds of variables, that project concluded that who was on a team mattered less than how the team worked together. And the most important factor was psychological safety.

Further research has shown the incredible downsides of not having psychological safety, including negative impacts on employee well-being, including stress, burnout, and turnover, as well as on the overall performance of the organization.


How do you create psychological safety?

Edmondson is quick to point out that “it’s more magic than science” and it’s important for managers to remember this is “a climate that we co-create, sometimes in mysterious ways.”

Anyone who has worked on a team marked by silence and the inability to speak up knows how hard it is to reverse that.

A lot of what goes into creating a psychologically safe environment are good management practices — things like establishing clear norms and expectations so there is a sense of predictability and fairness; encouraging open communication and actively listening to employees; making sure team members feel supported; and showing appreciation and humility when people do speak up.

There are a few additional tactics that Edmondson points to as well.

Here are some of my thoughts about psychological safety:
  • It is not the same as comfort. It is okay to feel uncomfortable sometimes, as long as you feel safe to take risks and speak up.
  • It is not about being friends with everyone on your team. It is about creating a respectful and inclusive environment where everyone feels like they can belong.
  • It takes time and effort to build psychological safety. It is not something that happens overnight.

Saturday, October 21, 2023

Should Trackable Pill Technologies Be Used to Facilitate Adherence Among Patients Without Insight?

Tahir Rahman
AMA J Ethics. 2019;21(4):E332-336.
doi: 10.1001/amajethics.2019.332.


Aripiprazole tablets with sensor offer a new wireless trackable form of aripiprazole that represents a clear departure from existing drug delivery systems, routes, or formulations. This tracking technology raises concerns about the ethical treatment of patients with psychosis when it could introduce unintended treatment challenges. The use of “trackable” pills and other “smart” drugs or nanodrugs assumes renewed importance given that physicians are responsible for determining patients’ decision-making capacity. Psychiatrists are uniquely positioned in society to advocate on behalf of vulnerable patients with mental health disorders. The case presented here focuses on guidance for capacity determination and informed consent for such nanodrugs.


Ethics and Nanodrug Prescribing

Clinicians often struggle with improving treatment adherence in patients with psychosis who lack insight and decision-making capacity, so trackable nanodrugs, even though not proven to improve compliance, are worth considering. At the same time, guidelines are lacking to help clinicians determine which patients are appropriate for trackable nanodrug prescribing. The introduction of an actual tracking device in a patient who suffers from delusions of an imagined tracking device, like Mr A, raises specific ethical concerns. Clinicians have widely accepted the premise that confronting delusions is countertherapeutic. The introduction of trackable pill technology could similarly introduce unintended harms. Paul Appelbaum has argued that “with paranoid patients often worried about being monitored or tracked, giving them a pill that does exactly that is an odd approach to treatment.” The fear of invasion of privacy might discourage some patients from being compliant with their medical care and thus foster distrust of all psychiatric services. A good therapeutic relationship (often with family, friends, or a guardian involved) is critical to the patient’s engaging in ongoing psychiatric services.

The use of trackable pill technology to improve compliance deserves further scrutiny, as continued reliance on informal physician determinations of decision-making capacity remains standard practice. Most patients are not yet accustomed to the idea of ingesting a trackable pill. Therefore, an explanation of the intervention must be incorporated into the informed consent process, assuming the patient has decision-making capacity. Since patients may have concerns about the collected data being stored on a device, clinicians might have to answer questions regarding potential breaches of confidentiality. They will also have to contend with the clinical implications of acquiring patient compliance data and justifying decisions based on such information. Below is a practical guide to aid clinicians in the appropriate use of this technology.

Friday, October 20, 2023

Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs

Huber, C., Dreber, A., et al. (2023).
Proceedings of the National Academy of Sciences, 120(23).


Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity—variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity—estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs—indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.


Using experiments involves leeway in choosing one out of many possible experimental designs. This choice constitutes a source of uncertainty in estimating the underlying effect size which is not incorporated into common research practices. This study presents the results of a crowd-sourced project in which 45 independent teams implemented research designs to address the same research question: Does competition affect moral behavior? We find a small adverse effect of competition on moral behavior in a meta-analysis involving 18,123 experimental participants. Importantly, however, the variation in effect size estimates across the 45 designs is substantially larger than the variation expected due to sampling errors. This “design heterogeneity” highlights that the generalizability and informativeness of individual experimental designs are limited.
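
The paper's central quantity is design heterogeneity: variation in true effect sizes across designs over and above what sampling error would produce. A minimal sketch of how such heterogeneity can be estimated from per-design effect estimates, using simulated data (all numbers here are invented, not the study's data) and the standard DerSimonian-Laird random-effects estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated setup (invented numbers): 45 designs, each yielding an effect-size
# estimate with its own standard error, plus true between-design variation
# ("design heterogeneity") on top of sampling error.
k = 45
true_mean = -0.10                           # small adverse effect
tau = 0.08                                  # between-design SD of true effects
se = rng.uniform(0.03, 0.07, size=k)        # per-design standard errors
true_effects = rng.normal(true_mean, tau, size=k)
estimates = rng.normal(true_effects, se)

# DerSimonian-Laird estimate of tau^2: heterogeneity beyond sampling variance.
w = 1.0 / se**2
fixed_mean = np.sum(w * estimates) / np.sum(w)
Q = np.sum(w * (estimates - fixed_mean) ** 2)       # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2_hat = max(0.0, (Q - (k - 1)) / c)

# Compare the estimated between-design SD with the average standard error.
ratio = np.sqrt(tau2_hat) / se.mean()
print(f"estimated between-design SD: {np.sqrt(tau2_hat):.3f}, "
      f"ratio to mean SE: {ratio:.2f}")
```

In this toy setup, the ratio of the estimated between-design standard deviation to the average standard error plays the same role as the paper's "about 1.6 times as large" figure.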

Here are some of the key takeaways from the research:
  • Competition can have a small, but significant, negative effect on moral behavior.
  • This effect likely arises because competition can make people more self-interested and less concerned about the well-being of others.
  • The findings of this research have important implications for our understanding of how competition affects moral behavior.

Thursday, October 19, 2023

10 Things Your Corporate Culture Needs to Get Right

D. Sull and C. Sull
MIT Sloan Management Review
Originally posted 16 September 21

Here are two excerpts:

What distinguishes a good corporate culture from a bad one in the eyes of employees? This is a trickier question than it might appear at first glance. Most leaders agree in principle that culture matters but have widely divergent views about which elements of culture are most important. In an earlier study, we identified more than 60 distinct values that companies listed among their official “core values.” Most often, an organization’s official core values signal top executives’ cultural aspirations, rather than reflecting the elements of corporate culture that matter most to employees.

Which elements of corporate life shape how employees rate culture? To address this question, we analyzed the language workers used to describe their employers. When they complete a Glassdoor review, employees not only rate corporate culture on a 5-point scale, but also describe — in their own words — the pros and cons of working at their organization. The topics they choose to write about reveal which factors are most salient to them, and sentiment analysis reveals how positively (or negatively) they feel about each topic. (Glassdoor reviews are remarkably balanced between positive and negative observations.) By analyzing the relationship between their descriptions and rating of culture, we can start to understand what employees are talking about when they talk about culture.


The following chart summarizes the factors that best predict whether employees love (or loathe) their companies. The bars represent each topic’s relative importance in predicting a company’s culture rating. Whether employees feel respected, for example, is 18 times more powerful as a predictor of a company’s culture rating compared with the average topic. We’ve grouped related factors to tease out broader themes that emerge from our analysis.
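
The "relative importance" idea can be illustrated with a small regression sketch on simulated data (the topic ordering, coefficients, and sample sizes are all invented, not the authors' data): each topic's coefficient is compared with the average across topics, so a dominant topic like respect ends up several times the average.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated sketch (invented data): per-review topic-sentiment scores predict
# a 1-5 culture rating; a topic's "relative importance" is its estimated
# coefficient divided by the average absolute coefficient across topics.
n_reviews, n_topics = 5000, 6
X = rng.normal(size=(n_reviews, n_topics))            # standardized sentiment
true_beta = np.array([1.8, 0.4, 0.3, 0.2, 0.2, 0.1])  # topic 0 = "respect"
rating = X @ true_beta + rng.normal(0.0, 1.0, size=n_reviews)

design = np.column_stack([np.ones(n_reviews), X])     # add intercept column
beta, *_ = np.linalg.lstsq(design, rating, rcond=None)
importance = np.abs(beta[1:]) / np.abs(beta[1:]).mean()
print("relative importance per topic:", np.round(importance, 2))
```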

Here are the 10 cultural dynamics, along with my take:
  1. Employees feel respected. Employees want to be treated with consideration, courtesy, and dignity. They want their perspectives to be taken seriously and their contributions to be valued.
  2. Employees have supportive leaders. Employees need leaders who will help them to do their best work, respond to their requests, accommodate their individual needs, offer encouragement, and have their backs.
  3. Leaders live core values. Employees need to see that their leaders are committed to the company's core values and that they are willing to walk the talk.
  4. Toxic managers. Toxic managers can create a poisonous work environment and lead to high turnover rates and low productivity.
  5. Unethical behavior. Employees need to have confidence that their colleagues and leaders are acting ethically and honestly.
  6. Employees have good benefits. Employees expect to be compensated fairly and to have access to a good benefits package.
  7. Perks. Perks can be anything from free snacks to on-site childcare to flexible work arrangements. They can help to make the workplace more enjoyable and improve employee morale.
  8. Employees have opportunities for learning and development. Employees want to grow and develop in their careers. They need to have access to training and development opportunities that will help them to reach their full potential.
  9. Job security. Employees need to feel secure in their jobs in order to focus on their work and be productive.
  10. Reorganizations. How employees view reorganizations, including frequency and quality.
The authors argue that these ten elements are essential for creating a corporate culture that is attractive to top talent, drives innovation and productivity, and leads to long-term success.

Additional thoughts

In addition to the ten elements listed above, there are a number of other factors that can contribute to a strong and positive corporate culture. These include:
  • Diversity and inclusion. Employees want to work in a company where they feel respected and valued, regardless of their race, ethnicity, gender, sexual orientation, or other factors.
  • Collaboration and teamwork. Employees want to work in a company where they can collaborate with others and achieve common goals.
  • Open communication and feedback. Employees need to feel comfortable communicating with their managers and colleagues, and they need to be open to receiving feedback.
  • Celebration of success. It is important to celebrate successes and recognize employees for their contributions. This helps to create a positive and supportive work environment.
By investing in these factors, companies can create a corporate culture that is both attractive to employees and beneficial to the bottom line.

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethical Theory and Moral Practice, 26, 361–375 (2023).


Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.

Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. Moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions them for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Identifying ownership of IP as AI begins to develop solutions in a faster, smarter way compared to humans.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works in only a subset of the population – or has only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the proposed drug compounds will not be as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
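
The point about non-diverse training sets can be made concrete with a toy simulation (all distributions and numbers are invented, not from the interview): a "model" fit only on an over-represented subgroup performs well for that subgroup but noticeably worse for an under-represented one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (invented): a biomarker predicts drug response, but its
# distribution among responders differs between two patient subgroups.
def simulate(responder_mean, n):
    y = rng.integers(0, 2, size=n)          # 1 = responder, 0 = non-responder
    x = np.where(y == 1,
                 rng.normal(responder_mean, 1.0, size=n),
                 rng.normal(-1.0, 1.0, size=n))
    return x, y

n = 2000
x_a, y_a = simulate(2.0, n)     # group A: over-represented in training
x_b, y_b = simulate(0.5, n)     # group B: absent from training

# "Model" fit only on group A: classify by the midpoint of A's class means.
threshold = (x_a[y_a == 1].mean() + x_a[y_a == 0].mean()) / 2

acc_a = np.mean((x_a > threshold) == (y_a == 1))
acc_b = np.mean((x_b > threshold) == (y_b == 1))
print(f"accuracy on group A: {acc_a:.2f}, on group B: {acc_b:.2f}")
```

The accuracy gap between the two groups is the simulated analogue of a drug that "works only in a subset of a population."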


Here is my take:

One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Monday, October 16, 2023

Why Every Leader Needs to Worry About Toxic Culture

D. Sull, W. Cipolli, & C. Brighenti
MIT Sloan Management Review
Originally posted 16 March 22

Here is an excerpt:

The High Costs of a Toxic Culture

By identifying the core elements of a toxic culture, we can synthesize existing research on closely related topics, including discrimination, abusive managers, unethical organizational behavior, workplace injustice, and incivility. This research allows us to tally the full cost of a toxic culture to individuals and organizations. And the toll, in human suffering and financial expenses, is staggering.

A large body of research shows that working in a toxic atmosphere is associated with elevated levels of stress, burnout, and mental health issues. Toxicity also translates into physical illness. When employees experience injustice in the workplace, their odds of suffering a major disease (including coronary disease, asthma, diabetes, and arthritis) increase by 35% to 55%.

In addition to the pain imposed on employees, a toxic culture also imposes costs that flow directly to the organization’s bottom line. When a toxic atmosphere makes workers sick, for example, their employer typically foots the bill. Among U.S. workers with health benefits, two-thirds have their health care expenses paid directly by their employer. By one estimate, toxic workplaces added an incremental $16 billion in employee health care costs in 2008. The figure below summarizes some of the costs of a toxic culture for organizations.

According to a study from the Society for Human Resource Management, 1 in 5 employees left a job at some point in their career because of its toxic culture. That survey, conducted before the pandemic, is consistent with our findings that a toxic culture is the best predictor of a company experiencing higher employee attrition than its industry overall during the first six months of the Great Resignation. Gallup estimates that the cost of replacing an employee who quits can total up to two times their annual salary when all direct and indirect expenses are accounted for.

Companies with a toxic culture will not only lose employees — they’ll also struggle to replace workers who jump ship. Over three-quarters of job seekers research an employer’s culture before applying for a job. In an age of online employee reviews, companies cannot keep their culture problems a secret for long, and a toxic culture, as we showed above, is by far the strongest predictor of a low review on Glassdoor. Having a toxic employer brand makes it harder to attract candidates.

Here is my take:

The article identifies five attributes that have a disproportionate impact on how workers view a toxic culture:
  • Disrespectful
  • Noninclusive
  • Unethical
  • Cutthroat
  • Abusive
Leaders play a pivotal role in shaping and maintaining a positive work culture. They must be aware of the impact of toxic culture and actively work towards building a healthy and supportive environment.

To tackle toxic culture, leaders must first identify the behaviors and practices that contribute to it. Common toxic behaviors include micromanagement, lack of transparency, favoritism, excessive competition, and poor communication. Once the root causes of the problem have been identified, leaders can develop strategies to address them.

The article provides a number of recommendations for leaders to create a positive work culture, including:
  • Setting clear expectations for behavior and holding employees accountable
  • Fostering a culture of trust and respect
  • Promoting diversity and inclusion
  • Providing employees with opportunities for growth and development
  • Creating a work-life balance
Leaders who are committed to creating a positive work culture will see the benefits reflected in their team's performance and the organization's bottom line.

Sunday, October 15, 2023

Bullshit blind spots: the roles of miscalibration and information processing in bullshit detection

Shane Littrell & Jonathan A. Fugelsang
(2023) Thinking & Reasoning
DOI: 10.1080/13546783.2023.2189163


The growing prevalence of misleading information (i.e., bullshit) in society carries with it an increased need to understand the processes underlying many people’s susceptibility to falling for it. Here we report two studies (N = 412) examining the associations between one’s ability to detect pseudo-profound bullshit, confidence in one’s bullshit detection abilities, and the metacognitive experience of evaluating potentially misleading information. We find that people with the lowest (highest) bullshit detection performance overestimate (underestimate) their detection abilities and overplace (underplace) those abilities when compared to others. Additionally, people reported using both intuitive and reflective thinking processes when evaluating misleading information. Taken together, these results show that both highly bullshit-receptive and highly bullshit-resistant people are largely unaware of the extent to which they can detect bullshit and that traditional miserly processing explanations of receptivity to misleading information may be insufficient to fully account for these effects.

Here's my summary:

The authors of the article argue that people have two main blind spots when it comes to detecting bullshit: miscalibration and information processing. Miscalibration is the mismatch between actual bullshit-detection ability and confidence in that ability: the weakest detectors overestimate their skill, while the strongest underestimate theirs.
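
The pattern the authors describe, with low performers overestimating and high performers underestimating themselves, can be sketched numerically (all numbers invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: actual detection scores (0-100) and self-estimates that
# regress toward a high anchor, yielding overestimation at the bottom of
# the distribution and underestimation at the top.
n = 400
actual = rng.uniform(0, 100, size=n)
self_estimate = 0.3 * actual + 55 + rng.normal(0, 5, size=n)

low = actual < np.percentile(actual, 25)
high = actual > np.percentile(actual, 75)

bias_low = np.mean(self_estimate[low] - actual[low])     # > 0: overestimate
bias_high = np.mean(self_estimate[high] - actual[high])  # < 0: underestimate
print(f"bottom-quartile bias: {bias_low:+.1f}, "
      f"top-quartile bias: {bias_high:+.1f}")
```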

Information processing is the way that we process information in order to make judgments. The authors argue that we are more likely to be fooled by bullshit when we are not paying close attention or when we are processing information quickly.

The authors also discuss some strategies for overcoming these blind spots. One strategy is to be aware of our own biases and limitations. We should also be critical of the information that we consume and take the time to evaluate evidence carefully.

Overall, the article provides a helpful framework for understanding the challenges of bullshit detection. It also offers some practical advice for overcoming these challenges.

Here are some additional tips for detecting bullshit:
  • Be skeptical of claims that seem too good to be true.
  • Look for evidence to support the claims that are being made.
  • Be aware of the speaker or writer's motives.
  • Ask yourself if the claims are making sense and whether they are consistent with what you already know.
  • If you're not sure whether something is bullshit, it's better to err on the side of caution and be skeptical.

Saturday, October 14, 2023

Overconfidently conspiratorial: Conspiracy believers are dispositionally overconfident and massively overestimate how much others agree with them

Pennycook, G., Binnendyk, J., & Rand, D. G. 
(2022, December 5). PsyArXiv


There is a pressing need to understand belief in false conspiracies. Past work has focused on the needs and motivations of conspiracy believers, as well as the role of overreliance on intuition. Here, we propose an alternative driver of belief in conspiracies: overconfidence. Across eight studies with 4,181 U.S. adults, conspiracy believers not only relied more on intuition, but also overestimated their performance on numeracy and perception tests (i.e. were overconfident in their own abilities). This relationship with overconfidence was robust to controlling for analytic thinking, need for uniqueness, and narcissism, and was strongest for the most fringe conspiracies. We also found that conspiracy believers – particularly overconfident ones – massively overestimated (>4x) how much others agree with them: Although conspiracy beliefs were in the majority in only 12% of 150 conspiracies across three studies, conspiracy believers thought themselves to be in the majority 93% of the time.

Here is my summary:

The research found that people who believe in conspiracy theories are more likely to be overconfident in their own abilities and to overestimate how much others agree with them. This was true even when controlling for other factors, such as analytic thinking, need for uniqueness, and narcissism.

The researchers conducted a series of studies to test their hypothesis. In one study, they found that people who believed in conspiracy theories were more likely to overestimate their performance on numeracy and perception tests. In another study, they found that people who believed in conspiracy theories were more likely to overestimate how much others agreed with them about a variety of topics, including climate change and the 2016 US presidential election.

The researchers suggest that overconfidence may play a role in the formation and maintenance of conspiracy beliefs. When people are overconfident, they are more likely to dismiss evidence that contradicts their beliefs and to seek out information that confirms their beliefs. This can lead to a "filter bubble" effect, where people are only exposed to information that reinforces their existing beliefs.

The researchers also suggest that overconfidence may lead people to overestimate how much others agree with them about their conspiracy beliefs. This can make them feel more confident in their beliefs and less likely to question them.
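
A back-of-the-envelope sketch of the false-consensus gap (all numbers invented; the study itself reported majorities in only 12% of 150 conspiracies, with believers assuming a majority 93% of the time):

```python
# Invented numbers for five hypothetical conspiracies: actual share of people
# who agree vs. what believers estimate that share to be.
actual_agreement = [0.10, 0.20, 0.55, 0.08, 0.15]
perceived_agreement = [0.60, 0.70, 0.80, 0.55, 0.65]

n = len(actual_agreement)
actual_majority = sum(a > 0.5 for a in actual_agreement) / n
perceived_majority = sum(p > 0.5 for p in perceived_agreement) / n
mean_overestimate = sum(p / a for p, a in
                        zip(perceived_agreement, actual_agreement)) / n

print(f"actually in the majority: {actual_majority:.0%}; "
      f"believe they are: {perceived_majority:.0%}; "
      f"mean overestimate: {mean_overestimate:.1f}x")
```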

The findings of this research have implications for understanding and addressing the spread of conspiracy theories. It is important to be aware of the role that overconfidence may play in the formation and maintenance of conspiracy beliefs. This knowledge can be used to develop more effective interventions to prevent people from falling for conspiracy theories and to help people who already believe in conspiracy theories to critically evaluate their beliefs.

Friday, October 13, 2023

Humans Have Crossed 6 of 9 ‘Planetary Boundaries’

Meghan Bartels
Scientific American
Originally posted 13 September 23

Here is an excerpt:

The new study marks the second update since the 2009 paper and the first time scientists have included numerical guideposts for each boundary—a very significant development. “What is novel about this paper is: it’s the first time that all nine boundaries have been quantified,” says Rak Kim, an environmental social scientist at Utrecht University in the Netherlands, who wasn’t involved in the new study.

Since its initial presentation, the planetary boundaries model has drawn praise for presenting the various intertwined factors—beyond climate change alone—that influence Earth’s habitability. Carbon dioxide levels are included in the framework, of course, but so are biodiversity loss, chemical pollution, changes in the use of land and fresh water and the presence of the crucial elements nitrogen and phosphorus. None of these boundaries stands in isolation; for example, land use changes can affect biodiversity, and carbon dioxide affects ocean acidification, among other connections.

“It’s very easy to think about: there are eight, nine boundaries—but I think it’s a challenge to explain to people how these things interact,” says political scientist Victor Galaz of the Stockholm Resilience Center, a joint initiative of Stockholm University and the Beijer Institute of Ecological Economics at the Royal Swedish Academy of Sciences, who focuses on climate governance and wasn’t involved in the new research. “You pull on one end, and actually you’re affecting something else. And I don’t think people really understand that.”

Although the nine overall factors themselves are the same as those first identified in the 2009 paper, researchers on the projects have fine-tuned some of these boundaries’ details. “This most recent iteration has done a very nice job of fleshing out more and more data—and, more and more quantitatively, where we sit with respect to those boundaries,” says Jonathan Foley, executive director of Project Drawdown, a nonprofit organization that develops roadmaps for climate solutions. Foley was a co-author on the original 2009 paper but was not involved in the new research.

Still, the overall verdict remains the same as it was nearly 15 years ago. “It’s pretty alarming: We’re living on a planet unlike anything any humans have seen before,” Foley says. (Humans are also struggling to meet the United Nations’ 17 Sustainable Development Goals, which are designed to address environmental and societal challenges, such as hunger and gender inequality, in tandem.)

Here is my summary:

Planetary boundaries are the limits within which humanity can operate without causing irreversible damage to the Earth's ecosystems. The six boundaries that have been crossed are:
  • Climate change
  • Biosphere integrity
  • Land-system change
  • Nitrogen and phosphorus flows
  • Freshwater use
  • Novel entities, such as synthetic chemicals and plastics
The study found that these boundaries have been crossed due to a combination of factors, including population growth, economic development, and unsustainable consumption patterns. The authors of the study warn that crossing these planetary boundaries could have serious consequences for human health and well-being.

The article also discusses the implications of the study's findings for policymakers and businesses. The authors argue that we need to make a fundamental shift in the way we live and produce goods and services in order to stay within the planetary boundaries. This will require investments in renewable energy, sustainable agriculture, and other technologies that can help us to decouple economic growth from environmental damage.

Overall, the article provides a sobering assessment of the state of the planet. It is clear that we need to take urgent action to address the environmental challenges that we face.

Thursday, October 12, 2023

Patients need doctors who look like them. Can medicine diversify without affirmative action?

Kat Stafford
Originally posted 11 September 23

Here are two excerpts:

But more than two months after the Supreme Court struck down affirmative action in college admissions, concerns have arisen that a path into medicine may become much harder for students of color. Heightening the alarm: the medical field’s reckoning with longstanding health inequities.

Black Americans represent 13% of the U.S. population, yet just 6% of U.S. physicians are Black. Increasing representation among doctors is one solution experts believe could help disrupt health inequities.

The disparities stretch from birth to death, often beginning before Black babies take their first breath, a recent Associated Press series showed. Over and over, patients said their concerns were brushed aside or ignored, in part because of unchecked bias and racism within the medical system and a lack of representative care.

A UCLA study found the percentage of Black doctors had increased just 4% from 1900 to 2018.

But the affirmative action ruling dealt a “serious blow” to the medical field’s goals of improving that figure, the American Medical Association said, by prohibiting medical schools from considering race among many factors in admissions. The ruling, the AMA said, “will reverse gains made in the battle against health inequities.”

The consequences could affect Black health for generations to come, said Dr. Uché Blackstock, a New York emergency room physician and author of “LEGACY: A Black Physician Reckons with Racism in Medicine.”


“As medical professionals, any time we see disparities in care or outcomes of any kind, we have to look at the systems in which we are delivering care and we have to look at ways that we are falling short,” Wysong said.

Without affirmative action as a tool, career programs focused on engaging people of color could grow in importance.

For instance, the Pathways initiative engages students from Black, Latino and Indigenous communities from high school through medical school.

The program starts with building interest in dermatology as a career and continues to scholarships, workshops and mentorship programs. The goal: Increase the number of underrepresented dermatology residents from about 100 in 2022 to 250 by 2027, and grow the share of dermatology faculty who are people of color by 2%.

Tolliver credits her success in becoming a dermatologist in part to a scholarship she received through Ohio State University’s Young Scholars Program, which helps talented, first-generation Ohio students with financial need. The scholarship helped pave the way for medical school, but her involvement in the Pathways residency program also was central.

Wednesday, October 11, 2023

The Best-Case Heuristic: 4 Studies of Relative Optimism, Best-Case, Worst-Case, & Realistic Predictions in Relationships, Politics, & a Pandemic

Sjåstad, H., & Van Bavel, J. (2023).
Personality and Social Psychology Bulletin, 0(0).


In four experiments covering three different life domains, participants made future predictions in what they considered the most realistic scenario, an optimistic best-case scenario, or a pessimistic worst-case scenario (N = 2,900 Americans). Consistent with a best-case heuristic, participants made “realistic” predictions that were much closer to their best-case scenario than to their worst-case scenario. We found the same best-case asymmetry in health-related predictions during the COVID-19 pandemic, for romantic relationships, and a future presidential election. In a fully between-subject design (Experiment 4), realistic and best-case predictions were practically identical, and they were naturally made faster than the worst-case predictions. At least in the current study domains, the findings suggest that people generate “realistic” predictions by leaning toward their best-case scenario and largely ignoring their worst-case scenario. Although political conservatism was correlated with lower covid-related risk perception and lower support of early public-health interventions, the best-case prediction heuristic was ideologically symmetric.

Here is my summary:

This research examined how people make predictions about the future in different life domains, such as health, relationships, and politics. The researchers found that people tend to make predictions that are closer to their best-case scenario than to their worst-case scenario, even when asked to make a "realistic" prediction. This is known as the best-case heuristic.

The researchers conducted four experiments to test the best-case heuristic. Participants made predictions about events such as their risk of getting COVID-19, their satisfaction with their romantic relationship in one year, and the outcome of the next presidential election, giving three forecasts for each event: a best-case scenario, a worst-case scenario, and a realistic scenario. Participants' "realistic" predictions turned out to be much closer to their best-case predictions than to their worst-case predictions.

This best-case asymmetry held across all four experiments and all three life domains (health, relationships, and politics), suggesting that people rely on a best-case heuristic when predicting the future, even in serious and consequential matters.

The best-case heuristic has several implications for individuals and society. On the one hand, it can help people maintain a positive outlook and cope with difficult challenges. On the other, it can lead to unrealistic expectations and a failure to plan for potential problems.

Overall, the research on the best-case heuristic suggests that people's predictions about the future are often biased towards optimism. This is something to be aware of when making important decisions and when planning for the future.

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearn for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.

Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, under which users are financially rewarded for producing controversial content, and argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving Yoel Roth, Twitter's former head of trust and safety, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns. In my view, Musk is not a business genius and does not understand how to cultivate a healthy social media site.