Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 31, 2023

Top Arkansas psychiatrist accused of falsely imprisoning patients and Medicaid fraud

Laura Strickler & Stephanie Gosk
NBCNews.com
Originally posted July 23, 2023

Here is an excerpt:

The man who led the unit at the time, Dr. Brian Hyatt, was one of the most prominent psychiatrists in Arkansas and the chairman of the board that disciplines physicians. But he’s now under investigation by state and federal authorities who are probing allegations ranging from Medicaid fraud to false imprisonment.

VanWhy’s release marked the second time in two months that a patient was released from Hyatt’s unit only after a sheriff’s deputy showed up with a court order, according to court records.

“I think that they were running a scheme to hold people as long as possible, to bill their insurance as long as possible before kicking them out the door, and then filling the bed with someone else,” said Aaron Cash, a lawyer who represents VanWhy.

VanWhy and at least 25 other former patients have sued Hyatt, alleging that they were held against their will in his unit for days and sometimes weeks. And Arkansas Attorney General Tim Griffin’s office has accused Hyatt of running an insurance scam, claiming to treat patients he rarely saw and then billing Medicaid at “the highest severity code on every patient,” according to a search warrant affidavit.

As the lawsuits piled up, Hyatt remained chairman of the Arkansas State Medical Board. But he resigned from the board in late May after Drug Enforcement Administration agents executed a search warrant at his private practice. 

“I am not resigning because of any wrongdoing on my part but so that the Board may continue its important work without delay or distraction,” he wrote in a letter. “I will continue to defend myself in the proper forum against the false allegations being made against me.”

Northwest Medical Center in Springdale “abruptly terminated” Hyatt’s contract in May 2022, according to the attorney general’s search warrant affidavit. 

In April, the hospital agreed to pay $1.1 million in a settlement with the Arkansas Attorney General’s Office. Northwest Medical Center could not provide sufficient documentation that justified the hospitalization of 246 patients who were held in Hyatt’s unit, according to the attorney general’s office. 

As part of the settlement, the hospital denied any wrongdoing.

Sunday, July 30, 2023

Social influences in the digital era: When do people conform more to a human being or an artificial intelligence?

Riva, P., Aureli, N., & Silvestrini, F. (2022).
Acta Psychologica, 229, 103681.

Abstract

The spread of artificial intelligence (AI) technologies in ever-widening domains (e.g., virtual assistants) increases the chances of daily interactions between humans and AI. But can non-human agents influence human beings and perhaps even surpass the power of the influence of another human being? This research investigated whether people faced with different tasks (objective vs. subjective) could be more influenced by the information provided by another human being or an AI. We expected greater AI (vs. other humans) influence in objective tasks (i.e., based on a count and only one possible correct answer). By contrast, we expected greater human (vs. AI) influence in subjective tasks (based on attributing meaning to evocative images). In Study 1, participants (N = 156) completed a series of trials of an objective task to provide numerical estimates of the number of white dots pictured on black backgrounds. Results showed that participants conformed more with the AI's responses than the human ones. In Study 2, participants (N = 102) in a series of subjective tasks observed evocative images associated with two concepts ostensibly provided, again, by an AI or a human. Then, they rated how each concept described the images appropriately. Unlike the objective task, in the subjective one, participants conformed more with the human than the AI's responses. Overall, our findings show that under some circumstances, AI can influence people above and beyond the influence of other humans, offering new insights into social influence processes in the digital era.

Conclusion

Our research might offer new insights into social influence processes in the digital era. The results showed that people can conform more to non-human agents (than human ones) in a digital context under specific circumstances. For objective tasks eliciting uncertainty, people might be more prone to conform to AI agents than another human being, whereas for subjective tasks, other humans may continue to be the most credible source of influence compared with AI agents. These findings highlight the relevance of matching agents and the type of task to maximize social influence. Our findings could be important for non-human agent developers, showing under which circumstances a human is more prone to follow the guidance of non-human agents. Proposing a non-human agent in a task in which it is not so trusted could be suboptimal. Conversely, in objective-type tasks that elicit uncertainty, it might be advantageous to emphasize the nature of the agent as artificial intelligence, rather than trying to disguise the agent as human (as some existing chatbots tend to do). In conclusion, it is important to consider, on the one hand, that non-human agents can become credible sources of social influence and, on the other hand, the match between the type of agent and the type of task.

Summary:

The first study found that people conformed more to AI than to human sources on objective tasks, such as estimating the number of white dots on a black background. The second study found that people conformed more to human than to AI sources on subjective tasks, such as attributing meaning to evocative images.

The authors conclude that the findings of their studies suggest that AI can be a powerful source of social influence, especially on objective tasks. However, they also note that the literature on AI and social influence is still limited, and more research is needed to understand the conditions under which AI can be more or less influential than human sources.

Key points:
  • The spread of AI technologies is increasing the chances of daily interactions between humans and AI.
  • Research has shown that people can be influenced by AI on objective tasks, but they may be more influenced by humans on subjective tasks.
  • More research is needed to understand the conditions under which AI can be more or less influential than human sources.

Saturday, July 29, 2023

Racism in the Hands of an Angry God: How Image of God Impacts Cultural Racism in Relation to Police Treatment of African Americans

Lauve‐Moon, T. A., & Park, J. Z. (2023).
Journal for the Scientific Study of Religion.

Abstract

Previous research suggests an angry God image is a narrative schema predicting support for more punitive forms of criminal justice. However, this research has not explored the possibility that racialization may impact one's God image. We perform logistic regression on Wave V of the Baylor Religion Survey to examine the correlation between an angry God image and the belief that police shoot Blacks more often because Blacks are more violent than Whites (a context-specific form of cultural racism). Engaging critical insights from intersectionality theory, we also interact angry God image with both racialized identity and racialized religious tradition. Results suggest that the angry God schema is associated with this form of cultural racism for White people generally as well as White Evangelicals, yet for Black Protestants, belief in an angry God is associated with resistance against this type of cultural racism.

Discussion

Despite empirical evidence demonstrating the persistence of implicit bias in policing and institutional racism within law enforcement, the public continues to be divided on how to interpret police treatment of Black persons. This study uncovers an association between religious narrative schema, such as image of God, and one's attitude toward this social issue as well as how complex religion at the intersection of race and religious affiliation may impact the direction of this association between an angry God image and police treatment of Black persons. Our findings confirm that an angry God image is modestly associated with the narrative that police shoot Blacks more than Whites because Blacks are more violent than Whites. Even when controlling for other religious, political, and demographic factors, the association holds. While angry God is not the only factor or the most influential, our results suggest that it does work as a distinct factor in this understanding of police treatment of Black persons. Previous research supports this finding since the narrative that police shoot Blacks more because Blacks are more violent than Whites is based on punitive ideology. But whose version of the story is this telling?

Due to large White samples in most survey research, we contend that previous research has undertheorized the role that race plays in the association between angry God and punitive attitudes, and as a result, this research has likely inadvertently privileged a White narrative of angry God. Using the insights of critical quantitative methodology and intersectionality, the inclusion of interactions of angry God image with racialized identity as well as racialized religious traditions creates space for the telling of counternarratives regarding angry God image and the view that police shoot Blacks more than Whites because Blacks are more violent than Whites. The first interaction introduced assesses if racialized identity moderates the angry God effect. Although the interaction term for racialized identity and angry God is not significant, the predicted probabilities and average marginal effects elucidate a trend worth noting. While angry God image has no effect for Black respondents, it has a notable positive trend for White respondents, and this difference is pronounced on the higher half of the angry God scale. This supports our claim that past research has treated angry God image as a colorblind concept, yet this positive association between angry God and punitive criminal justice is raced, specifically raced White.
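
As a purely illustrative aside (not part of the paper), the kind of model the Discussion describes, a logistic regression with an angry-God-by-racialized-identity interaction plus predicted probabilities by group, can be sketched as follows. All variable names and the data file are hypothetical stand-ins for the Baylor Religion Survey measures:

```python
# Illustrative sketch only: variable names and the data file are hypothetical,
# not the authors' actual Baylor Religion Survey (Wave V) coding.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assume df holds one row per respondent with:
#   cultural_racism - 1 if the respondent endorses the "Blacks are more violent"
#                     explanation for police shootings, else 0
#   angry_god       - angry God image scale (higher = angrier image of God)
#   race_white      - 1 for White respondents, 0 for Black respondents
#   ideology, age, education - example controls
df = pd.read_csv("brs_wave5_subset.csv")  # hypothetical data extract

# Logistic regression with an angry God x racialized identity interaction.
model = smf.logit(
    "cultural_racism ~ angry_god * race_white + ideology + age + education",
    data=df,
).fit()
print(model.summary())

# Predicted probabilities across the angry God scale, separately by group,
# holding controls at their means. A flat curve for Black respondents next to
# a rising curve for White respondents would match the trend the authors report.
grid = pd.DataFrame({
    "angry_god": np.tile(np.linspace(df.angry_god.min(), df.angry_god.max(), 5), 2),
    "race_white": np.repeat([0, 1], 5),
    "ideology": df.ideology.mean(),
    "age": df.age.mean(),
    "education": df.education.mean(),
})
grid["pred_prob"] = model.predict(grid)
print(grid)
```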

Here is a summary:

The article explores the relationship between image of God (IoG) and cultural racism in relation to police treatment of African Americans. The authors argue that IoG can be a source of cultural racism, which is a form of racism that is embedded in the culture of a society. They suggest that people who hold an angry IoG are more likely to believe that African Americans are dangerous and violent, and that this belief can lead to discriminatory treatment by police.

Here are some of the key points from the article:
  • Image of God (IoG) can be a source of cultural racism.
  • People who hold an angry IoG are more likely to believe that African Americans are dangerous and violent.
  • This belief can lead to discriminatory treatment by police.
  • Interventions that address IoG could be an effective way to reduce racism and discrimination.

Friday, July 28, 2023

Humans, Neanderthals, robots and rights

Mamak, K.
Ethics Inf Technol 24, 33 (2022).

Abstract

Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.

Conclusions

The place of robots in the law universe depends on many things. One is our decision about their moral status, but even if we accept that some robots are equal to humans, this does not mean that they have the same legal status as humans. Law, as a human product, is tailored to a human being who has a body. Embodiment impacts the content of law, and entities with different ontologies are not suited to human law. As discussed here, Neanderthals, who are very close to us from a biological point of view, and human-like robots cannot be counted as humans by law. Doing so would be anthropocentric and harmful to such entities because it could ignore aspects of their lives that are important for them. It is certain that the current law is not ready for human-like robots.


Here is a summary: 

In terms of robot rights, one factor to consider is the nature of robots. Robots are becoming increasingly sophisticated, and some experts believe that they may eventually become as intelligent as humans. If this is the case, then it is possible that robots could deserve the same rights as humans.

Another factor to consider is the relationship between humans and robots. Humans have a long history of using animals for their own ends, and some people argue that robots are simply another kind of being to be used. If that is the case, robots may not deserve the same rights as humans.

Key points:
  • The question of robot rights is a complex one, and there is no easy answer.
  • The nature of robots and the relationship between humans and robots are two important factors to consider when thinking about robot rights.
  • It is important to start thinking about robot rights now, before robots become too sophisticated.

Thursday, July 27, 2023

Supervisees’ Perspectives of Inadequate, Harmful, and Exceptional Clinical Supervision: Are We Listening?

Hutman, H., Ellis, M. V., et al. (2023).
The Counseling Psychologist, 001100002311725.

Abstract

Supervisees’ experiences in supervision vary remarkably. To capture such variability, Ellis and colleagues offered a framework for understanding and assessing inadequate, harmful, and exceptional supervision. Although their framework was supported, it did not offer a nuanced understanding of these supervision experiences. Using consensual qualitative research–modified, this study sought to obtain a rich description of inadequate, harmful, and exceptional supervision. Participants (N = 135) were presented with definitions and provided responses (n = 156) to open-ended questions describing their inadequate (n = 63), harmful (n = 30), and/or exceptional (n = 63) supervision experiences. Supervisees reporting harmful experiences described supervisors as neglectful and callous, whereas inadequate supervision reflected inappropriate feedback, unavailability, and unresponsiveness. Conversely, exceptional supervision involved safety, clinical paradigm shifts, and modeling specific techniques or theories. Implications for supervision research, theory, and practice are discussed.

Significance of the Scholarship to the Public

We identified themes from trainees’ descriptions of their inadequate, harmful, and exceptional experiences in clinical supervision. The findings offer a nuanced understanding of supervisees’ lived experiences, illustrating how clinical supervisors went awry or went above and beyond, and suggesting strategies for promoting exceptional supervision and preventing harmful and inadequate supervision.

Here is a summary:

Background: Clinical supervision is a critical component of professional development for mental health professionals. However, not all supervision is created equal. Some supervision can be inadequate, harmful, or exceptional.

Research question: The authors of this article investigated supervisees' perspectives of inadequate, harmful, and exceptional clinical supervision.

Methods: The authors conducted a qualitative study with 135 supervisees. They asked supervisees to describe their experiences of inadequate, harmful, and exceptional supervision.

Results: The authors found that supervisees' experiences of inadequate, harmful, and exceptional supervision varied widely. However, there were some common themes that emerged. For example, supervisees who experienced inadequate supervision often felt unsupported, neglected, and judged. Supervisees who experienced harmful supervision often felt traumatized, humiliated, and disempowered. Supervisees who experienced exceptional supervision often felt supported, challenged, and empowered.

Conclusions: The authors concluded that supervisees' experiences of clinical supervision can have a profound impact on their professional development. They suggest that we need to listen to supervisees' experiences of supervision and to take steps to ensure that all supervisees receive high-quality supervision.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors. By contrast, decision-makers did disregard advice when it came from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Tuesday, July 25, 2023

Inside the DeSantis Doc That Showtime Didn’t Want You to See

Roger Sollenberger
The Daily Beast
Originally posted July 23, 2023

Here are two excerpts:

The documentary contrasts DeSantis’ account with those of two anonymous ex-prisoners, whom the transcript indicated were not represented in the flesh; their claims were delivered in “voice notes.”

“Officer DeSantis was one of the officers who oversaw the force-feeding and torture we were subjected to in 2006,” one former prisoner said. The second former detainee claimed that DeSantis was “one of the officers who mistreated us,” adding that DeSantis was “a bad person” and “a very bad officer.”

Over a view of “Camp X-Ray”—the now-abandoned section of Gitmo where DeSantis was stationed but has since fallen into disrepair—the narrator revealed that a VICE freedom of information request for the Florida governor’s active duty record returned “little about Guantanamo” outside of his arrival in March 2006.

But as the documentary noted, that period was “a brutal point in the prison’s history.”

Detainees had been on a prolonged hunger strike to call attention to their treatment, and the government’s solution was to force-feed prisoners Ensure dietary supplements through tubes placed in their noses. Detainees alleged the process caused excessive bleeding and was repeated “until they vomited and defecated on themselves.” (DeSantis, a legal adviser, would almost certainly have been aware that the UN concluded that force-feeding amounted to torture the month before he started working at Guantanamo.)

(cut)

The transcript then presented DeSantis’ own 2018 account of his role in the forced-feedings, when he told CBS News Miami that he had personally and professionally endorsed force-feeding as a legal way to break prisoner hunger strikes.

“The commander wants to know, well how do I combat this? So one of the jobs as a legal adviser will be like, ‘Hey, you actually can force feed, here’s what you can do, here’s kinda the rules of that,’” DeSantis said at the time.

DeSantis altered that language in a Piers Morgan interview this March, again invoking his junior rank as evidence that he would have lacked standing to order forced-feeding.

“There may have been a commander that would have done feeding if someone was going to die, but that was not something that I would have even had authority to do,” he said. However, DeSantis did not deny that he had provided that legal advice.


My thoughts:

I would like to see the documentary and make my own decision about its veracity.
  • The decision by Showtime to pull the episode is a significant one, as it suggests that the network is willing to censor its programming in order to avoid political controversy.
  • This is a worrying development, as it raises questions about the future of independent journalism in the United States.
  • If news organizations are afraid to air stories that are critical of powerful figures, then it will be much more difficult for the public to hold those figures accountable.
  • I hope that Showtime will reconsider its decision and allow the episode to air. The public has a right to know about the allegations against DeSantis, and it is important that these allegations be given a fair hearing.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality—and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will make them more aware of the risks of relying on AI models and better able to evaluate the information those models generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J. (2022).
American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.
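
To make the idea of explicit values and weights concrete, here is a minimal, purely hypothetical sketch of such a decision aid (it is not the system Meier and colleagues propose). It reports a recommendation along with the per-principle scores and weights behind it, so a committee can see exactly where it would revise the weighting:

```python
# Hypothetical sketch of a "transparent" decision aid of the sort described in
# the excerpt. This is not Meier and colleagues' actual algorithm; it only
# illustrates how explicit scores and weights keep the reasoning inspectable.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PrincipleScore:
    name: str      # e.g., "autonomy" or "beneficence"
    score: float   # how strongly the option satisfies the principle, from -1 to 1
    weight: float  # how heavily the aid weighs the principle, from 0 to 1

def recommend(option: str, principles: List[PrincipleScore]) -> Dict:
    """Return a recommendation together with the full breakdown behind it."""
    total_weight = sum(p.weight for p in principles)
    weighted = sum(p.score * p.weight for p in principles) / total_weight
    return {
        "option": option,
        "recommendation": "support" if weighted > 0 else "oppose",
        "confidence": round(abs(weighted), 2),
        "breakdown": {p.name: {"score": p.score, "weight": p.weight} for p in principles},
    }

# A committee can read the breakdown, decide that beneficence deserves more
# weight in this case, rerun the aid with revised weights, and then state
# exactly why its final judgment diverges from the AI's recommendation.
print(recommend(
    "proceed with the proposed intervention",
    [PrincipleScore("autonomy", score=0.6, weight=0.5),
     PrincipleScore("beneficence", score=-0.4, weight=0.5)],
))
```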

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.