Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, July 31, 2023

Top Arkansas psychiatrist accused of falsely imprisoning patients and Medicaid fraud

Laura Strickler & Stephanie Gosk
Originally posted July 23, 2023

Here is an excerpt:

The man who led the unit at the time, Dr. Brian Hyatt, was one of the most prominent psychiatrists in Arkansas and the chairman of the board that disciplines physicians. But he’s now under investigation by state and federal authorities who are probing allegations ranging from Medicaid fraud to false imprisonment.

VanWhy’s release marked the second time in two months that a patient was released from Hyatt’s unit only after a sheriff’s deputy showed up with a court order, according to court records.

“I think that they were running a scheme to hold people as long as possible, to bill their insurance as long as possible before kicking them out the door, and then filling the bed with someone else,” said Aaron Cash, a lawyer who represents VanWhy.

VanWhy and at least 25 other former patients have sued Hyatt, alleging that they were held against their will in his unit for days and sometimes weeks. And Arkansas Attorney General Tim Griffin’s office has accused Hyatt of running an insurance scam, claiming to treat patients he rarely saw and then billing Medicaid at “the highest severity code on every patient,” according to a search warrant affidavit.

As the lawsuits piled up, Hyatt remained chairman of the Arkansas State Medical Board. But he resigned from the board in late May after Drug Enforcement Administration agents executed a search warrant at his private practice. 

“I am not resigning because of any wrongdoing on my part but so that the Board may continue its important work without delay or distraction,” he wrote in a letter. “I will continue to defend myself in the proper forum against the false allegations being made against me.”

Northwest Medical Center in Springdale “abruptly terminated” Hyatt’s contract in May 2022, according to the attorney general’s search warrant affidavit. 

In April, the hospital agreed to pay $1.1 million in a settlement with the Arkansas Attorney General’s Office. Northwest Medical Center could not provide sufficient documentation that justified the hospitalization of 246 patients who were held in Hyatt’s unit, according to the attorney general’s office. 

As part of the settlement, the hospital denied any wrongdoing.

Sunday, July 30, 2023

Social influences in the digital era: When do people conform more to a human being or an artificial intelligence?

Riva, P., Aureli, N., & Silvestrini, F. 
(2022). Acta Psychologica, 229, 103681. 


The spread of artificial intelligence (AI) technologies in ever-widening domains (e.g., virtual assistants) increases the chances of daily interactions between humans and AI. But can non-human agents influence human beings and perhaps even surpass the power of the influence of another human being? This research investigated whether people faced with different tasks (objective vs. subjective) could be more influenced by the information provided by another human being or an AI. We expected greater AI (vs. other humans) influence in objective tasks (i.e., based on a count and only one possible correct answer). By contrast, we expected greater human (vs. AI) influence in subjective tasks (based on attributing meaning to evocative images). In Study 1, participants (N = 156) completed a series of trials of an objective task to provide numerical estimates of the number of white dots pictured on black backgrounds. Results showed that participants conformed more with the AI's responses than the human ones. In Study 2, participants (N = 102) in a series of subjective tasks observed evocative images associated with two concepts ostensibly provided, again, by an AI or a human. Then, they rated how each concept described the images appropriately. Unlike the objective task, in the subjective one, participants conformed more with the human than the AI's responses. Overall, our findings show that under some circumstances, AI can influence people above and beyond the influence of other humans, offering new insights into social influence processes in the digital era.
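Conformity in paradigms like Study 1's dot-estimation task is often quantified as how far a participant's revised estimate moves toward the source's answer. Here is a minimal sketch of one such index; the function name and the exact operationalization are illustrative assumptions, not the paper's published measure:

```python
def conformity_index(first_estimate, source_estimate, second_estimate):
    """Fraction of the gap toward the source's answer that the participant
    closes on their second estimate (0 = no shift, 1 = full adoption).
    Hypothetical operationalization, for illustration only."""
    gap = source_estimate - first_estimate
    if gap == 0:
        return 0.0  # the source agreed with the participant; no shift to measure
    return (second_estimate - first_estimate) / gap

# A participant first guesses 40 dots, sees the AI report 60,
# and revises to 55: they closed 0.75 of the gap.
shift = conformity_index(40, 60, 55)
```

Averaging such an index separately over AI-source and human-source trials would yield the kind of comparison the abstract reports.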


Our research might offer new insights into social influence processes in the digital era. The results showed that, under specific circumstances, people can conform more to non-human agents than to human ones in a digital context. For objective tasks eliciting uncertainty, people might be more prone to conform to AI agents than to another human being, whereas for subjective tasks, other humans may remain the most credible source of influence compared with AI agents. These findings highlight the relevance of matching the type of agent to the type of task to maximize social influence. Our findings could be important for developers of non-human agents, showing under which circumstances a person is more prone to follow the guidance of non-human agents. Proposing a non-human agent for a task in which it is not well trusted could be suboptimal. Conversely, in objective tasks that elicit uncertainty, it might be advantageous to emphasize the agent's nature as artificial intelligence, rather than trying to disguise the agent as human (as some existing chatbots tend to do). In conclusion, it is important to consider, on the one hand, that non-human agents can become credible sources of social influence and, on the other, the match between the type of agent and the type of task.


The first study found that people conformed more to AI than to human sources on objective tasks, such as estimating the number of white dots on a black background. The second study found that people conformed more to human than to AI sources on subjective tasks, such as attributing meaning to evocative images.

The authors conclude that the findings of their studies suggest that AI can be a powerful source of social influence, especially on objective tasks. However, they also note that the literature on AI and social influence is still limited, and more research is needed to understand the conditions under which AI can be more or less influential than human sources.

Key points:
  • The spread of AI technologies is increasing the chances of daily interactions between humans and AI.
  • Research has shown that people can be influenced by AI on objective tasks, but they may be more influenced by humans on subjective tasks.
  • More research is needed to understand the conditions under which AI can be more or less influential than human sources.

Saturday, July 29, 2023

Racism in the Hands of an Angry God: How Image of God Impacts Cultural Racism in Relation to Police Treatment of African Americans

Lauve‐Moon, T. A., & Park, J. Z. (2023).
Journal for the Scientific Study of Religion.


Previous research suggests an angry God image is a narrative schema predicting support for more punitive forms of criminal justice. However, this research has not explored the possibility that racialization may impact one's God image. We perform logistic regression on Wave V of the Baylor Religion Survey to examine the correlation between an angry God image and the belief that police shoot Blacks more often because Blacks are more violent than Whites (a context-specific form of cultural racism). Engaging critical insights from intersectionality theory, we also interact angry God image with both racialized identity and racialized religious tradition. Results suggest that the angry God schema is associated with this form of cultural racism for White people generally as well as White Evangelicals, yet for Black Protestants, belief in an angry God is associated with resistance against this type of cultural racism.


Despite empirical evidence demonstrating the persistence of implicit bias in policing and institutional racism within law enforcement, the public continues to be divided on how to interpret police treatment of Black persons. This study uncovers an association between religious narrative schemas, such as image of God, and one's attitude toward this social issue, as well as how complex religion at the intersection of race and religious affiliation may impact the direction of this association between an angry God image and police treatment of Black persons. Our findings confirm that an angry God image is modestly associated with the narrative that police shoot Blacks more than Whites because Blacks are more violent than Whites. Even when controlling for other religious, political, and demographic factors, the association holds. While angry God is not the only factor or the most influential, our results suggest that it does work as a distinct factor in this understanding of police treatment of Black persons. Previous research supports this finding, since the narrative that police shoot Blacks more because Blacks are more violent than Whites is based on punitive ideology. But whose version of the story is this telling?

Due to large White samples in most survey research, we contend that previous research has undertheorized the role that race plays in the association between angry God and punitive attitudes, and as a result, this research has likely inadvertently privileged a White narrative of angry God. Using the insights of critical quantitative methodology and intersectionality, the inclusion of interactions of angry God image with racialized identity as well as racialized religious traditions creates space for the telling of counternarratives regarding angry God image and the view that police shoot Blacks more than Whites because Blacks are more violent than Whites. The first interaction introduced assesses whether racialized identity moderates the angry God effect. Although the interaction term for racialized identity and angry God is not significant, the predicted probabilities and average marginal effects elucidate a trend worth noting. While angry God image has no effect for Black respondents, it has a notable positive trend for White respondents, and this difference is pronounced on the higher half of the angry God scale. This supports our claim that past research has treated angry God image as a colorblind concept, yet this positive association between angry God and punitive criminal justice is raced, specifically raced White.
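The moderation analysis described above, an angry God × racialized identity interaction in a logistic regression, can be sketched as follows. The coefficients below are invented for illustration only (they are not the paper's estimates); the sketch just shows how an interaction term lets the angry God slope differ by group:

```python
import math

def predict_prob(angry_god, is_white,
                 b0=-1.2, b_god=0.05, b_white=0.3, b_interact=0.15):
    """Predicted probability of endorsing the cultural-racism item from a
    logistic model with an angry God x racialized-identity interaction.
    All coefficients are hypothetical, for illustration only."""
    logit = (b0 + b_god * angry_god + b_white * is_white
             + b_interact * angry_god * is_white)
    return 1.0 / (1.0 + math.exp(-logit))

# The marginal effect of angry God differs by group: for White respondents
# the slope includes the interaction term; for Black respondents it does not.
white_effect = predict_prob(7, 1) - predict_prob(1, 1)
black_effect = predict_prob(7, 0) - predict_prob(1, 0)
```

With these made-up coefficients the angry God effect is visibly larger for White respondents, mirroring the trend the authors report via predicted probabilities and average marginal effects.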

Here is a summary:

The article explores the relationship between image of God (IoG) and cultural racism in relation to police treatment of African Americans. The authors argue that IoG can be a source of cultural racism, which is a form of racism that is embedded in the culture of a society. They suggest that people who hold an angry IoG are more likely to believe that African Americans are dangerous and violent, and that this belief can lead to discriminatory treatment by police.

Here are some of the key points from the article:
  • Image of God (IoG) can be a source of cultural racism.
  • People who hold an angry IoG are more likely to believe that African Americans are dangerous and violent.
  • This belief can lead to discriminatory treatment by police.
  • Interventions that address IoG could be an effective way to reduce racism and discrimination.

Friday, July 28, 2023

Humans, Neanderthals, robots and rights

Mamak, K.
Ethics Inf Technol 24, 33 (2022).


Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.


The place of robots in the law universe depends on many things. One is our decision about their moral status, but even if we accept that some robots are equal to humans, this does not mean that they have the same legal status as humans. Law, as a human product, is tailored to a human being who has a body. Embodiment impacts the content of law, and entities with different ontologies are not suited to human law. As discussed here, Neanderthals, who are very close to us from a biological point of view, and human-like robots cannot be counted as humans by law. Doing so would be anthropocentric and harmful to such entities because it could ignore aspects of their lives that are important for them. It is certain that the current law is not ready for human-like robots.

Here is a summary: 

In terms of robot rights, one factor to consider is the nature of robots. Robots are becoming increasingly sophisticated, and some experts believe that they may eventually become as intelligent as humans. If this is the case, then it is possible that robots could deserve the same rights as humans.

Another factor to consider is the relationship between humans and robots. Humans have a long history of using animals without granting them human rights, and some people argue that robots occupy a similar position. If that is the case, then robots may not deserve the same rights as humans.

Key points:
  • The question of robot rights is a complex one, and there is no easy answer.
  • The nature of robots and the relationship between humans and robots are two important factors to consider when thinking about robot rights.
  • It is important to start thinking about robot rights now, before robots become too sophisticated.

Thursday, July 27, 2023

Supervisees’ Perspectives of Inadequate, Harmful, and Exceptional Clinical Supervision: Are We Listening?

Hutman, H., Ellis, M. V., et al. (2023).
The Counseling Psychologist, 001100002311725.


Supervisees’ experiences in supervision vary remarkably. To capture such variability, Ellis and colleagues offered a framework for understanding and assessing inadequate, harmful, and exceptional supervision. Although their framework was supported, it did not offer a nuanced understanding of these supervision experiences. Using consensual qualitative research–modified, this study sought to obtain a rich description of inadequate, harmful, and exceptional supervision. Participants (N = 135) were presented with definitions and provided responses (n = 156) to open-ended questions describing their inadequate (n = 63), harmful (n = 30), and/or exceptional (n = 63) supervision experiences. Supervisees reporting harmful experiences described supervisors as neglectful and callous, whereas inadequate supervision reflected inappropriate feedback, unavailability, and unresponsiveness. Conversely, exceptional supervision involved safety, clinical paradigm shifts, and modeling specific techniques or theories. Implications for supervision research, theory, and practice are discussed.

Significance of the Scholarship to the Public

We identified themes from trainees’ descriptions of their inadequate, harmful, and exceptional experiences in clinical supervision. The findings offer a nuanced understanding of supervisees’ lived experiences, illustrating how clinical supervisors went awry or went above and beyond, and suggesting strategies for promoting exceptional supervision and preventing harmful and inadequate supervision.

Here is a summary:

Background: Clinical supervision is a critical component of professional development for mental health professionals. However, not all supervision is created equal. Some supervision can be inadequate, harmful, or exceptional.

Research question: The authors of this article investigated supervisees' perspectives of inadequate, harmful, and exceptional clinical supervision.

Methods: The authors conducted a qualitative study with 135 supervisees. They asked supervisees to describe their experiences of inadequate, harmful, and exceptional supervision.

Results: The authors found that supervisees' experiences of inadequate, harmful, and exceptional supervision varied widely. However, there were some common themes that emerged. For example, supervisees who experienced inadequate supervision often felt unsupported, neglected, and judged. Supervisees who experienced harmful supervision often felt traumatized, humiliated, and disempowered. Supervisees who experienced exceptional supervision often felt supported, challenged, and empowered.

Conclusions: The authors concluded that supervisees' experiences of clinical supervision can have a profound impact on their professional development. They suggest that we need to listen to supervisees' experiences of supervision and to take steps to ensure that all supervisees receive high-quality supervision.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).


Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.


Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors: we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors. By contrast, decision-makers do disregard advice from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Tuesday, July 25, 2023

Inside the DeSantis Doc That Showtime Didn’t Want You to See

Roger Sollenberger
The Daily Beast
Originally posted July 23, 2023

Here are two excerpts:

The documentary contrasts DeSantis’ account with those of two anonymous ex-prisoners, whom the transcript indicated were not represented in the flesh; their claims were delivered in “voice notes.”

“Officer DeSantis was one of the officers who oversaw the force-feeding and torture we were subjected to in 2006,” one former prisoner said. The second former detainee claimed that DeSantis was “one of the officers who mistreated us,” adding that DeSantis was “a bad person” and “a very bad officer.”

Over a view of “Camp X-Ray”—the now-abandoned section of Gitmo where DeSantis was stationed but has since fallen into disrepair—the narrator revealed that a VICE freedom of information request for the Florida governor’s active duty record returned “little about Guantanamo” outside of his arrival in March 2006.

But as the documentary noted, that period was “a brutal point in the prison’s history.”

Detainees had been on a prolonged hunger strike to call attention to their treatment, and the government’s solution was to force-feed prisoners Ensure dietary supplements through tubes placed in their noses. Detainees alleged the process caused excessive bleeding and was repeated “until they vomited and defecated on themselves.” (DeSantis, a legal adviser, would almost certainly have been aware that the UN concluded that force-feeding amounted to torture the month before he started working at Guantanamo.)


The transcript then presented DeSantis’ own 2018 account of his role in the forced-feedings, when he told CBS News Miami that he had personally and professionally endorsed force-feeding as a legal way to break prisoner hunger strikes.

“The commander wants to know, well how do I combat this? So one of the jobs as a legal adviser will be like, ‘Hey, you actually can force feed, here’s what you can do, here’s kinda the rules of that,’” DeSantis said at the time.

DeSantis altered that language in a Piers Morgan interview this March, again invoking his junior rank as evidence that he would have lacked standing to order forced-feeding.

“There may have been a commander that would have done feeding if someone was going to die, but that was not something that I would have even had authority to do,” he said. However, DeSantis did not deny that he had provided that legal advice.

My thoughts:

I would like to see the documentary and make my own decision about its veracity.
  • The decision by Showtime to pull the episode is a significant one, as it suggests that the network is willing to censor its programming in order to avoid political controversy.
  • This is a worrying development, as it raises questions about the future of independent journalism in the United States.
  • If news organizations are afraid to air stories that are critical of powerful figures, then it will be much more difficult for the public to hold those figures accountable.
  • I hope that Showtime will reconsider its decision and allow the episode to air. The public has a right to know about the allegations against DeSantis, and it is important that these allegations be given a fair hearing.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality—and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.

Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published June 26, 2023

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful for medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.

Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.

Friday, July 21, 2023

Belief in Five Spiritual Entities Edges Down to New Lows

Megan Brenan
Originally posted 20 July 23

The percentages of Americans who believe in each of five religious entities -- God, angels, heaven, hell and the devil -- have edged downward by three to five percentage points since 2016. Still, majorities believe in each, ranging from a high of 74% believing in God to lows of 59% for hell and 58% for the devil. About two-thirds each believe in angels (69%) and heaven (67%).

Gallup has used this framework to measure belief in these spiritual entities five times since 2001, and the May 1-24, 2023, poll finds that each is at its lowest point. Compared with 2001, belief in God and heaven is down the most (16 points each), while belief in hell has fallen 12 points, and the devil and angels are down 10 points each.

This question asks respondents whether they believe in each concept or if they are unsure, and from 13% to 15% currently say they are not sure. At the same time, nearly three in 10 U.S. adults do not believe in the devil or hell, while almost two in 10 do not believe in angels and heaven, and 12% say they do not believe in God.

As the percentage of believers has dropped over the past two decades, the corresponding increases have occurred mostly in nonbelief, with much smaller increases in uncertainty. This is true for all but belief in God, which has seen nearly equal increases in uncertainty and nonbelief.

In the current poll, about half of Americans, 51%, believe in all five spiritual entities, while 11% do not believe in any of them. Another 7% are not sure about all of them, while the rest (31%) believe in some and not others.

Gallup periodically measures Americans’ belief in God with different question wordings, producing slightly different results. While the majority of U.S. adults say they believe in God regardless of the question wording, when not offered the option to say they are unsure, significantly more (81% in a survey conducted last year) said they believe in God.

My take: Despite the decline in belief, majorities of Americans still believe in each of the five spiritual entities. This suggests that religion remains an important part of American culture, even as the country becomes more secularized.

Thursday, July 20, 2023

Big tech is bad. Big A.I. will be worse.

Daron Acemoglu and Simon Johnson
The New York Times
Originally posted 15 June 23

Here is an excerpt:

Today, those countervailing forces either don’t exist or are greatly weakened. Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.

At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton). For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization, fear of killing the golden (donor) goose or undermining national security means that most members of Congress would still rather look away.

To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.

Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems. If Big A.I. wants to use our data, we want something in return to address problems that communities define and to raise the true productivity of workers. Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies. It would also require a greater diversity of approaches to new technology, thus making another dent in the monopoly of Big A.I.

We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do — including whether we are in compliance with “acceptable” behavior, as defined by employers and how the police interpret the law, and which can now be assessed in real time by A.I. There is a real danger that A.I. will be used to manipulate our choices and distort lives.

Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms. Such a tax system would put shareholder pressure on tech titans to break themselves up, thus lowering their effective tax rate. More competition would help by creating a diversity of ideas and more opportunities to develop a pro-human direction for digital technologies.

The article argues that big tech companies, such as Google, Amazon, and Facebook, have already accumulated too much power and control. I concur that if these companies are allowed to continue their unchecked growth, they will become even more powerful and oppressive, given the strength of AI relative to the limited reasoning capacity of individual human beings.

Wednesday, July 19, 2023

Accuracy and social motivations shape judgements of (mis)information

Rathje, S., Roozenbeek, J., Van Bavel, J.J. et al.
Nat Hum Behav 7, 892–903 (2023).


The extent to which belief in (mis)information reflects lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people’s judgements of the accuracy of news reflects motivational factors.


There is a sizeable partisan divide in the kind of news liberals and conservatives believe in, and conservatives tend to believe in and share more false news than liberals. Our research suggests these differences are not immutable. Motivating people to be accurate improves accuracy about the veracity of true (but not false) news headlines, reduces partisan bias and closes a substantial portion of the gap in accuracy between liberals and conservatives. Theoretically, these results identify accuracy and social motivations as key factors in driving news belief and sharing. Practically, these results suggest that shifting motivations may be a useful strategy for creating a shared reality across the political spectrum.

Key findings
  • Accuracy motivations: Participants who were motivated to be accurate were more likely to correctly identify true and false news headlines.
  • Social motivations: Participants who were motivated to identify news that would be liked by their political allies were less likely to correctly identify true and false news headlines.
  • Combination of motivations: Participants who were motivated by both accuracy and social motivations were more likely to correctly identify true news headlines from the opposing political party.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Monday, July 17, 2023

Rethinking the Virtuous Circle Hypothesis on Social Media: Subjective versus Objective Knowledge and Political Participation

Lee, S., Diehl, T., & Valenzuela, S. (2021).
Human Communication Research, 48(1), 57–87.


Despite early promise, scholarship has shown little empirical evidence of learning from the news on social media. At the same time, scholars have documented the problem of information ‘snacking’ and information quality on these platforms. These parallel trends in the literature challenge long-held assumptions about the pro-social effects of news consumption and political participation. We argue that reliance on social media for news does not contribute to people’s real level of political knowledge (objective knowledge), but instead only influences people’s impression of being informed (subjective knowledge). Subjective knowledge is just as important for driving political participation, a potentially troubling trend given the nature of news consumption on social media. We test this expectation with panel survey data from the 2018 U.S. midterm elections. Two path model specifications (fixed effects and autoregressive) support our theoretical model. Implications for the study of the ‘dark side’ of social media and democracy are discussed.

Here is a summary.

The "virtuous circle hypothesis" states that news consumption leads to political knowledge, which leads to political participation. This hypothesis has been supported by previous research, but the authors of this paper argue that it may not hold true in the context of social media.

The authors argue that social media news consumption is often characterized by "information snacking" and "echo chambers," which can limit the amount of factual knowledge that people gain. Additionally, they argue that people's subjective sense of being informed can be just as important as their objective knowledge for driving political participation.

To test their hypothesis, the authors conducted a study of panel survey data from the 2018 U.S. midterm elections. They found that social media news consumption was positively associated with subjective knowledge, but not with objective knowledge. They also found that subjective knowledge was positively associated with political participation, even after controlling for objective knowledge.

The authors' findings suggest that the virtuous circle hypothesis may not hold true in the context of social media. They argue that social media news consumption can lead to a false sense of being informed, which can have a negative impact on political participation.

Here are some of the key takeaways from the research:
  • Social media news consumption may not lead to increased factual knowledge.
  • Subjective knowledge can be just as important as objective knowledge for driving political participation.
  • Social media news consumption can lead to a false sense of being informed.

The findings of this research have important implications for our understanding of the relationship between social media, news consumption, and political participation. They suggest that we need to be careful about how we use social media to consume news, and that we should be aware of the potential for social media to create a false sense of being informed.

Sunday, July 16, 2023

Gender-Affirming Care for Cisgender People

Theodore E. Schall and Jacob D. Moses
Hastings Center Report 53, no. 3 (2023): 15-24.
DOI: 10.1002/hast.1486 


Gender-affirming care is almost exclusively discussed in connection with transgender medicine. However, this article argues that such care predominates among cisgender patients, people whose gender identity matches their sex assigned at birth. To advance this argument, we trace historical shifts in transgender medicine since the 1950s to identify central components of "gender-affirming care" that distinguish it from previous therapeutic models, such as "sex reassignment." Next, we sketch two historical cases, reconstructive mammoplasty and testicular implants, to show how cisgender patients offered justifications grounded in authenticity and gender affirmation that closely mirror rationales supporting gender-affirming care for transgender people. The comparison exposes significant disparities in contemporary health policy regarding care for cis and trans patients. We consider two possible objections to the analogy we draw, but ultimately argue that these disparities are rooted in "trans exceptionalism" that produces demonstrable harm.

Here is my summary:

The authors cite several examples of gender-affirming care for cisgender people, such as breast reconstruction following mastectomy, penile implants following testicular cancer, hormone replacement therapy, and hair removal. They argue that these interventions can be just as important for cisgender people's mental and physical health as they are for transgender people.

The authors also note that gender-affirming care for cisgender people is often less scrutinized and less stigmatized than such care for transgender people. Cisgender people do not need special letters of permission from mental health providers to access care whose primary purpose is to affirm their gender identity. And insurance companies are less likely to exclude gender-affirming care for cisgender people from their coverage.

The authors argue that the differences in the conceptualization and treatment of gender-affirming care for cisgender and transgender people reflect broad anti-trans bias in society and health care. They call for a more inclusive view of gender-affirming care that recognizes the needs of all people, regardless of their gender identity.

Final thoughts:
  1. Gender-affirming care can be lifesaving. It can help reduce anxiety, depression, and suicidal thoughts, and can therefore be framed as suicide prevention.
  2. Gender-affirming care is not experimental. It has been studied extensively and is safe and effective. See other posts on this site for more comprehensive examples.
  3. All people deserve access to gender-affirming care, regardless of their gender identity. This is basic equality and fairness in terms of access to medical care.

Saturday, July 15, 2023

Christ, Country, and Conspiracies? Christian Nationalism, Biblical Literalism, and Belief in Conspiracy Theories

Walker, B., & Vegter, A.
Journal for the Study of Religion
May 8, 2023.


When misinformation is rampant, “fake news” is rising, and conspiracy theories are widespread, social scientists have a vested interest in understanding who is most susceptible to these false narratives and why. Recent research suggests Christians are especially susceptible to belief in conspiracy theories in the United States, but scholars have yet to ascertain the role of religiopolitical identities and epistemological approaches, specifically Christian nationalism and biblical literalism, in generalized conspiracy thinking. Because Christian nationalists sense that the nation is under cultural threat and biblical literalism provides an alternative (often anti-elite) source of information, we predict that both will amplify conspiracy thinking. We find that Christian nationalism and biblical literalism independently predict conspiracy thinking, but that the effect of Christian nationalism increases with literalism. Our results point to the contingent effects of Christian nationalism and the need to include religious variables in understanding conspiracy thinking.


I could not find a free pdf. Here is a summary.

The study's findings suggest that Christian nationalism and biblical literalism may be contributing factors to the rise of conspiracy theories in the United States. The study also suggests that efforts to address the problem of conspiracy theories may need to focus on addressing these underlying beliefs.

Here are some additional details from the study:
  • The study surveyed a nationally representative sample of U.S. adults.
  • The study found that 25% of Christian nationalists and 20% of biblical literalists believe in at least one conspiracy theory, compared to 12% of people who do not hold these beliefs.
  • The study found that the belief in conspiracy theories is amplified when people feel that their nation is under cultural threat. For example, Christian nationalists who believe that the nation is under cultural threat are more likely to believe that the government is hiding information about extraterrestrial life.

Friday, July 14, 2023

The illusion of moral decline

Mastroianni, A.M., Gilbert, D.T.
Nature (2023).


Anecdotal evidence indicates that people believe that morality is declining. In a series of studies using both archival and original data (n = 12,492,983), we show that people in at least 60 nations around the world believe that morality is declining, that they have believed this for at least 70 years and that they attribute this decline both to the decreasing morality of individuals as they age and to the decreasing morality of successive generations. Next, we show that people’s reports of the morality of their contemporaries have not declined over time, suggesting that the perception of moral decline is an illusion. Finally, we show how a simple mechanism based on two well-established psychological phenomena (biased exposure to information and biased memory for information) can produce an illusion of moral decline, and we report studies that confirm two of its predictions about the circumstances under which the perception of moral decline is attenuated, eliminated or reversed (that is, when respondents are asked about the morality of people they know well or people who lived before the respondent was born). Together, our studies show that the perception of moral decline is pervasive, perdurable, unfounded and easily produced. This illusion has implications for research on the misallocation of scarce resources, the underuse of social support and social influence.


Participants in the foregoing studies believed that morality has declined, and they believed this in every decade and in every nation we studied. They believed the decline began somewhere around the time they were born, regardless of when that was, and they believed it continues to this day. They believed the decline was a result both of individuals becoming less moral as they move through time and of the replacement of more moral people by less moral people. And they believed that the people they personally know and the people who lived before they did are exceptions to this rule. About all these things, they were almost certainly mistaken. One reason they may have held these mistaken beliefs is that they may typically have encountered more negative than positive information about the morality of contemporaries whom they did not personally know, and the negative information may have faded more quickly from memory or lost its emotional impact more quickly than the positive information did, leading them to believe that people today are not as kind, nice, honest or good as once upon a time they were.

Here are some important points:
  • There are a number of reasons why people might believe that morality is declining. One reason is that people tend to focus on negative news stories, which can give the impression that the world is a more dangerous and immoral place than it actually is. Another reason is that people tend to remember negative events more vividly than positive events, which can also lead to the impression that morality is declining.
  • Despite the widespread belief in moral decline, there is no evidence to suggest that morality is actually getting worse. In fact, there is evidence to suggest that morality has been improving over time. For example, crime rates have been declining for decades, and people are more likely to volunteer and donate to charity than they were in the past.
  • The illusion of moral decline can have a number of negative consequences. It can lead to cynicism, apathy, and a sense of hopelessness. It can also make it more difficult to solve social problems, because people may believe that the problem is too big or too complex to be solved.

Thursday, July 13, 2023

A time for moral actions: Moral identity, morality-as-cooperation and moral circles predict support of collective action to fight the COVID-19 pandemic in an international sample

Boggio, P. S., Nezlek, J. B., et al. (2022).
Journal of Personality and Social Psychology,
122(4), 937-956.


Understanding what factors are linked to public health behavior in a global pandemic is critical to mobilizing an effective public health response. Although public policy and health messages are often framed through the lens of individual benefit, many of the behavioral strategies needed to combat a pandemic require individual sacrifices to benefit the collective welfare. Therefore, we examined the relationship between individuals’ morality and their support for public health measures. In a large-scale study with samples from 68 countries worldwide (Study 1; N = 46,576), we found robust evidence that moral identity, morality-as-cooperation, and moral circles are each positively related to people’s willingness to engage in public health behaviors and policy support. Together, these moral dispositions accounted for 9.8%, 10.2%, and 6.2% of support for limiting contact, improving hygiene, and supporting policy change, respectively. These morality variables (Study 2) and Schwartz’s values dimensions (Study 3) were also associated with behavioral responses across 42 countries in the form of reduced physical mobility during the pandemic. These results suggest that morality may help mobilize citizens to support public health policy.


Here is a summary of this research.  I could not find a free pdf.

The COVID-19 pandemic has had a significant impact on the world, and it has required individuals to make sacrifices for the collective good. The authors of this study were interested in understanding how individuals' moral identities, their beliefs about morality, and their sense of moral community might influence their willingness to support collective action to fight the pandemic.

The authors conducted a study with a sample of over 46,000 people from 68 countries. They found that people who had a strong moral identity, who believed that morality is about cooperation, and who had a broad sense of moral community were more likely to support collective action to fight the pandemic. These findings suggest that individuals' moral identities and beliefs can play an important role in motivating them to take action to benefit the collective good.

The authors conclude that their findings have important implications for public health campaigns. They suggest that public health campaigns should focus on appealing to people's moral identities and beliefs in order to motivate them to take action to fight the pandemic.

Here are some of the key findings of the study:
  • People with a strong moral identity were more likely to support collective action to fight the pandemic.
  • People who believed that morality is about cooperation were more likely to support collective action to fight the pandemic.
  • People who had a broad sense of moral community were more likely to support collective action to fight the pandemic.

Wednesday, July 12, 2023

Doing Good or Feeling Good? Justice Concerns Predict Online Shaming Via Deservingness and Schadenfreude

Barron, A., Woodyatt, L., et al. (2023).


Public shaming has moved from the village square and is now an established online phenomenon. The current paper explores whether online shaming is motivated by a person’s desire to do good (a justice motive); and/or, because it feels good (a hedonic motive), specifically, as a form of malicious pleasure at another’s misfortune (schadenfreude). We examine two key aspects of social media that may moderate these processes: anonymity (Study 1) and social norms (the responses of other users; Studies 2-3). Across three experiments (N = 225, 198, 202) participants were presented with a fabricated news article featuring an instance of Islamophobia and given the opportunity to respond. Participants’ concerns about social justice were not directly positively associated with online shaming and had few consistent indirect effects on shaming via moral outrage. Rather, justice concerns were primarily associated with shaming via participants’ perception that the offender was deserving of negative consequences, and their feelings of schadenfreude regarding these consequences. Anonymity did not moderate this process and there was mixed evidence for the qualifying effect of social norms. Overall, the current studies point to the hedonic motive in general and schadenfreude specifically as a key moral emotion associated with people’s shaming behaviour.


The results from three studies point to perceptions of deservingness and schadenfreude as important predictors of online shaming. Given the exploratory nature of the current work and the paucity of existing research on online shaming, many avenues exist for future research. Social psychology is well placed to understand both individual and group processes that may influence shaming behaviour – in particular, how certain features of the online environment and aspects of the transgressor may interact to influence the nature and severity of online shaming behaviour. As society continues to rely on social media to consume content and connect with others, we are hopeful that future research stimulates a more comprehensive understanding of the dynamics of online shaming and its consequences. 

Here are some additional key points from the article:
  • Online shaming is a form of social punishment that is increasingly common in the digital age.
  • There are two main motivations for online shaming: a desire to do good (a justice motive) and a desire to feel good (a hedonic motive).
  • The feeling of schadenfreude plays an important role in mediating the relationship between justice concerns and online shaming.

Tuesday, July 11, 2023

Conspirituality: How New Age conspiracy theories threaten public health

D. Beres, M. Remski, & J. Walker
Originally posted 17 June 23

Here is an excerpt:

Disaster capitalism and disaster spirituality rely, respectively, on an endless supply of items to commodify and minds to recruit. While both roar into high gear in times of widespread precarity and vulnerability, in disaster spirituality there is arguably more at stake on the supply side. Hedge fund managers can buy up distressed properties in post-Katrina New Orleans to gentrify and flip. They have cash on hand to pull from when opportunity strikes, whereas most spiritual figures have to use other means for acquisitions and recruitment during times of distress.

Most of the influencers operating in today’s conspirituality landscape stand outside of mainstream economies and institutional support. They’ve been developing fringe religious ideas and making money however they can, usually up against high customer turnover.

For the mega-rich disaster capitalist, a hurricane or civil war is a windfall. But for the skint disaster spiritualist, a public catastrophe like 9/11 or COVID-19 is a life raft. Many have no choice but to climb aboard and ride. Additionally, if your spiritual group has been claiming for years to have the answers to life’s most desperate problems, the disaster is an irresistible dare, a chance to make good on divine promises. If the spiritual group has been selling health ideologies or products they guarantee will ensure perfect health, how can they turn away from the opportunity presented by a pandemic?

Here is my summary with some extras:

The article argues that conspirituality is a growing problem that is threatening public health. Conspiritualists push the false beliefs that vaccines are harmful, that the COVID-19 pandemic is a hoax, and that natural immunity is the best way to protect oneself from disease. These beliefs can lead people to make decisions that put their health and the health of others at risk.

The article also argues that conspirituality is often spread through social media platforms, where the accuracy of information is difficult to verify. This can lead people to believe false or misleading claims, with serious consequences for their health. Meanwhile, some individuals profit from spreading this disinformation.

The article concludes by calling for more research on conspirituality and its impact on public health. It also calls for public health professionals to be more aware of conspirituality and to develop strategies to address it.
  • Conspirituality is a term that combines "conspiracy" and "spirituality." It refers to the belief that certain anti-science ideas (such as alternative medicine, non-scientific interventions, and spiritual healing) are being suppressed by a powerful elite. Conspiritualists often believe that this elite is responsible for a wide range of problems, including the COVID-19 pandemic.
  • The term "conspirituality" was coined by sociologists Charlotte Ward and David Voas in 2011. They argued that conspirituality is a unique form of conspiracy theory characterized by blending 1) New Age beliefs (religious and spiritual ideas about a paradigm shift in consciousness, in which we will all awaken to a new reality) with 2) traditional conspiracy theories (in which an elite, powerful, and covert group of individuals controls, or seeks to control, the social and political order).

Monday, July 10, 2023

Santa Monica’s Headspace Health laid off dozens of therapists. Their patients don’t know where they went

Jaimie Ding
The Los Angeles Times
Originally posted July 7, 2023

When Headspace Health laid off 33 of its therapists June 29, patients were told their providers had left the platform.

What they didn’t know was their therapists had lost their jobs. And they suddenly had no way to contact them.

Several therapists who were let go from Headspace, the Santa Monica meditation app and remote mental health care company, have raised alarm over their treatment and that of their patients after the companywide layoff of 181 total employees, which amounts to 15% of the workforce.

After the layoffs were announced in the morning without warning, these therapists said they immediately lost access to their patient care systems. Appointments, they said, were canceled without explanation, potentially causing irreparable harm to their patients and forcing them to violate the ethical guidelines of their profession.

One former therapist, who specializes in working with the LGBTQ+ community, said one of his clients had just come out in a session the day before he lost his job. The therapist requested anonymity because he was still awaiting severance from Headspace and feared retribution.

“I’m the first person they’ve ever talked to about it,” he said. “They’re never going back to therapy. They just had the first person they talked to about it abandon them.”

He didn’t know he had been laid off until 10 minutes after his first appointment was supposed to start and he had been unable to log into the system.

Some thoughts and analysis from me: there are clear ethical and legal concerns here.

Abandoning patients: Headspace Health did not provide patients with any notice or information about where their therapists had gone. This violates the ethical principle of fidelity, which requires healthcare providers to honor their commitments to patients. It also leaves patients feeling abandoned and without a source of care.

Potential for harm to patients: The sudden loss of a therapist can be disruptive and stressful for patients, especially those in the middle of treatment. It could lead to relapse, increased anxiety, or other negative consequences. In more extreme but realistic cases, it could even lead to suicide.

In addition to the ethical and legal problems outlined above, the article raises questions about the quality of care patients can expect from Headspace Health. A company willing to abruptly lay off therapists without notifying patients invites doubt about how it values their well-being and about its commitment to quality care. Headspace may believe itself to be a tech company, but it is a healthcare company, subject to many rules, regulations, and standards.

Sunday, July 9, 2023

Perceptions of Harm and Benefit Predict Judgments of Cultural Appropriation

Mosley, A. J., Heiphetz, L., et al. (2023).
Social Psychological and Personality Science.


What factors underlie judgments of cultural appropriation? In two studies, participants read 157 scenarios involving actors using cultural products or elements of racial/ethnic groups to which they did not belong. Participants evaluated the scenarios on seven dimensions (perceived cultural appropriation, harm to the community from which the cultural object originated, racism, profit to actors, the extent to which cultural objects represent a source of pride for source communities, benefits to actors, and celebration); the type of cultural object and the out-group associated with it varied across scenarios. Using both the scenario and the participant as units of analysis, perceived cultural appropriation was most strongly associated with perceived harm to the source community. We discuss broader implications for integrating research on inequality and moral psychology. The findings also have translational implications for educators and activists interested in increasing awareness of cultural appropriation.

General Discussion

People disagree about what constitutes cultural appropriation (Garcia Navaro, 2021). Prior research has indicated that prototypical cases of cultural appropriation include dominant-group members (e.g., White people) using cultural products stemming from subordinated groups (e.g., Black people; Katzarska-Miller et al., 2020; Mosley & Biernat, 2020). Minority group members’ use of dominant-group cultural products (termed “cultural dominance” by Rogers, 2006) is less likely to receive that label. However, even in prototypical cases, considerable variability in perceptions exists across actions (Mosley & Biernat, 2020). Furthermore, some perceivers—especially highly racially identified White Americans—view Black actors’ use of White cultural products as equally or more appropriative than White actors’ use of Black cultural products (Mosley et al., 2022).

These studies build on extant work by examining how features of out-group cultural use might contribute to construals of appropriation. We created a large set of scenarios, extending beyond the case of White–Black relations to include a greater diversity of racial groups (Native American, Hispanic, and Asian cultures). In all three studies, scenario-level analyses indicated that actions perceived to cause harm to the source community were also likely to be seen as appropriative, and those actions perceived to bring benefits to actors were less likely to be seen as appropriative. The strong connection between perceived source community harm and judgments of cultural appropriation corroborates research on the importance of harm to morally relevant judgments (Gray et al., 2014; Rozin & Royzman, 2001). At the same time, scenarios perceived to benefit actors—at least among the particular set of scenarios used here—were those that elicited a lower appropriation essence. However, at the level of individual perceivers, actor benefit (along with actor profit and some other measures) positively predicted appropriation perceptions. Perceiving benefit to an actor may contribute to a sense that the action is problematic to the source community (i.e., appropriative). Our findings are akin to findings on smoking and life expectancy: At the aggregate level, countries with higher rates of cigarette consumption have longer population life expectancies, but at the individual level, the more one smokes, the lower their life expectancy (Krause & Saunders, 2010). Scenarios that bring more benefit to actors are judged less appropriative, but individuals who see actor benefit in scenarios view them as more appropriative.

In all studies, participants perceived actions as more appropriative when White actors engaged with cultural products from Black communities, rather than the reverse pattern. This provides further evidence that the prototypical perpetrator of cultural appropriation is a high-status group member (Mosley & Biernat, 2020), where high-status actors have greater power and resources to exploit, marginalize, and cause harm to low-status source communities (Rogers, 2006).

Perhaps surprisingly, perceived appropriation and perceived celebration were positively correlated. Appropriation and celebration might be conceptualized as alternative, opposing construals of the same event. But this positive correlation may attest to the ambiguity, subjectivity, and disagreement about perceiving cultural appropriation: The same action may be construed as appropriative and (not or) celebratory. However, these construals were nonetheless distinct: Appropriation was positively correlated with perceived racism and harm, but celebration was negatively correlated with these factors.