Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, July 24, 2024

“Abuser” or “Tough Love” Boss?: The moderating role of leader performance in shaping the labels employees use in response to abusive supervision

Lount, R. B., Choi, W., & Tepper, B. J. (2024).
Organizational Behavior and Human Decision Processes, 183, 104339.

Abstract

We invoke leader categorization theory and labeling theory to examine the circumstances under which individuals come to perceive their managerial leaders as “abusers” or “tough love” bosses. In a field study, we show that leader performance moderates the relationship between a leader’s abusive supervision and the degree to which their followers label them as an abuser or a tough love leader. Heightened leader performance lowers the willingness to label the leader as an “abuser” while increasing one’s labeling of the leader as a “tough love” boss. This study also documents that leader performance moderates the indirect effect between abusive supervision and upward hostility (through abuser labeling) and the indirect effect between abusive supervision and positive career expectations (through tough love labeling). In a follow-up experiment, we again document that leader performance moderates the relationship between abusive supervision and the degree to which followers label their leaders as an abuser. Additionally, we provide support for a moderated indirect effect on a range of negative behavioral outcomes directed toward the leader through abuser labeling. We discuss the studies’ implications for theory, future research, and practice pertaining to abusive supervision.

Highlights

• Leader performance moderates labeling leaders who display abusive supervision.

• High leader performance weakens abuser labeling following abusive supervision.

• High leader performance strengthens tough love labeling following abusive supervision.

• Abuser labeling promotes upward hostility toward supervisor.

• Tough love labeling promotes increased career expectations.
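
A quick note on the statistics: "moderation" here means that the slope linking abusive supervision to abuser labeling changes with leader performance, which is standardly tested as an interaction term in a regression. A minimal sketch in Python, with simulated data and hypothetical variable names rather than the authors' dataset or code:

```python
# Hypothetical sketch of a moderation test: does leader performance
# change the slope from abusive supervision to abuser labeling?
# Simulated data and illustrative names, not the authors' study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "abusive_supervision": rng.normal(0, 1, n),
    "leader_performance": rng.normal(0, 1, n),
})
# Simulate the highlighted pattern: abuse predicts abuser labeling,
# but less strongly when leader performance is high.
df["abuser_label"] = (
    0.6 * df["abusive_supervision"]
    - 0.4 * df["abusive_supervision"] * df["leader_performance"]
    + rng.normal(0, 1, n)
)

# The interaction term carries the moderation hypothesis.
model = smf.ols(
    "abuser_label ~ abusive_supervision * leader_performance", data=df
).fit()
print(model.summary().tables[1])
```

A reliably negative interaction coefficient would correspond to the first two highlights above.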

Tuesday, July 23, 2024

The ethics of personalised digital duplicates: a minimally viable permissibility principle

Danaher, J., & Nyholm, S. (2024).
AI Ethics.

Abstract

With recent technological advances, it is possible to create personalised digital duplicates. These are partial, at least semi-autonomous, recreations of real people in digital form. Should such duplicates be created? When can they be used? This article develops a general framework for thinking about the ethics of digital duplicates. It starts by clarifying the object of inquiry, digital duplicates themselves: defining them, giving examples, and justifying the focus on them rather than other kinds of artificial being. It then identifies a set of generic harms and benefits associated with digital duplicates and uses this as the basis for formulating a minimally viable permissibility principle (MVPP) that stipulates widely agreeable conditions that should be met in order for the creation and use of digital duplicates to be ethically permissible. It concludes by assessing whether it is possible for those conditions to be met in practice, and whether it is possible for the use of digital duplicates to be more or less permissible.

Here are some thoughts:

Artificial intelligence advancements are making digital duplicates, recreations of real people in digital form, a more realistic possibility. This article explores the ethical considerations surrounding this new technology. The text defines "personalized digital duplicates" and clarifies how they differ from other AI creations.

A key concept introduced in the text is the "minimally viable permissibility principle" (MVPP). This framework can be used to assess the ethics of creating and using digital duplicates in specific situations. The MVPP considers factors such as informed consent, potential benefits and harms, transparency, and whether the real person's presence is truly necessary.
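
To make that checklist structure concrete, here is a minimal sketch of how MVPP-style conditions could be represented and jointly checked. The field names are my paraphrase of the factors listed above, not the authors' formal statement of the principle:

```python
# Illustrative sketch only: a checklist-style reading of MVPP-like
# conditions. Field names paraphrase the factors discussed above,
# not the authors' formal principle.
from dataclasses import dataclass, fields

@dataclass
class DuplicateProposal:
    informed_consent: bool   # the duplicated person has consented
    net_benefit: bool        # expected benefits outweigh expected harms
    transparent: bool        # users know they are facing a duplicate
    genuinely_needed: bool   # the real person's presence isn't required instead

def mvpp_permissible(proposal: DuplicateProposal) -> bool:
    """Permissible only if every condition holds; this says nothing
    about whether creating the duplicate is a *good* idea."""
    return all(getattr(proposal, f.name) for f in fields(proposal))

# A proposal failing any one condition is impermissible under this reading.
print(mvpp_permissible(DuplicateProposal(True, True, True, False)))  # False
```

The all-conditions-must-hold structure mirrors the paper's framing of the MVPP as a floor for permissibility rather than an endorsement.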

The text acknowledges that the MVPP doesn't determine whether creating a specific digital duplicate is a good idea, only whether it's ethically permissible. Additionally, the authors recognize that permissibility can exist on a spectrum: in some situations creating a digital duplicate will be clearly permissible, while in others it may be ethically questionable. The text concludes by calling for further research to weigh the potential benefits and harms of this technology.

Just brainstorming wildly here: The creation of a therapist's digital duplicate could provide wider access to psychotherapy services and enhance revenue, but ethical considerations around confidentiality, transparency, standards of care, psychologist responsibility, a lack of outcome data, safety, risk management, and the therapeutic relationship would need to be addressed.

Monday, July 22, 2024

Communal Narcissism and Sadism as Predictors of Everyday Vigilantism

Chen, F. X., Ok, E., & Aquino, K. (2023).
Personality Science, 4(1).

Abstract

Vigilantes monitor their social environment for signs of wrongdoing and administer unauthorized punishment on those who they perceive to be violating laws, social norms, or moral standards. We investigated whether the willingness to become a vigilante can be predicted by grandiose self-perceptions about one's communality (communal narcissism) and enjoyment of cruelty (sadism). As hypothesized, findings demonstrated both variables to be positively related to becoming a vigilante as measured by reports of past and anticipated vigilante behavior (Study 1) and by dispositional tendencies toward vigilantism (Studies 1 and 2). We also found communal narcissism and sadism predicted the perceived effectiveness of vigilante actions exhibited by others (Study 2) and the intention to engage in vigilantism after witnessing a norm violation (Study 3). Finally, Study 3 also demonstrated that the tendency for communal narcissists and sadists to become a vigilante might vary based on the expected consequences of the observed norm violation.

Relevance Statement

A prosocial orientation and cruelty seem antithetical. However, our results showed that these traits may converge in predicting individuals’ tendency to become a vigilante, marked by imposing unauthorized punishments on others.

Key Insights
  • We study factors that predict willingness to become a vigilante.
  • We found that communal narcissism predicted vigilante tendencies.
  • Sadism was also a significant predictor of vigilantism.
  • Effects hold even after controlling for demographic covariates.
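
That last point, prediction beyond demographic covariates, is usually checked by comparing nested regression models. A hedged sketch in Python with simulated data and hypothetical variable names (not the authors' data or analysis code):

```python
# Sketch of an incremental-validity check: do communal narcissism and
# sadism predict vigilantism beyond demographic covariates?
# Simulated data and hypothetical names, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "age": rng.normal(35, 10, n),
    "gender": rng.integers(0, 2, n),
    "communal_narcissism": rng.normal(0, 1, n),
    "sadism": rng.normal(0, 1, n),
})
# Simulate the reported pattern: both traits contribute to vigilantism.
df["vigilantism"] = (
    0.3 * df["communal_narcissism"]
    + 0.4 * df["sadism"]
    + rng.normal(0, 1, n)
)

# Covariates-only model vs. covariates plus the two focal traits.
base = smf.ols("vigilantism ~ age + gender", data=df).fit()
full = smf.ols(
    "vigilantism ~ age + gender + communal_narcissism + sadism", data=df
).fit()
print(anova_lm(base, full))  # F-test on the improvement in fit
```

A significant F-change in such a comparison is the standard reading of "effects hold after controlling for demographic covariates."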

Some thoughts as a clinical psychologist

This research on communal narcissism and sadism as factors in vigilantism is interesting from a clinical perspective. It sheds light on the motivations behind individuals who take justice into their own hands, often in ways that can be harmful.

The focus on communal narcissism, where people hold grandiose views of their own caring, helpfulness, and importance to their community, resonates with our understanding of in-group/out-group dynamics. These individuals might see themselves as righteous defenders of their community's morals, justifying their aggressive actions.

The link to sadism, the enjoyment of inflicting suffering, suggests a darker side to vigilantism. It's important to consider how a desire for control or even punishment might fuel some vigilante behavior, potentially escalating situations and overshadowing any sense of justice.

Sunday, July 21, 2024

Crying wolf: Warning about societal risks can be reputationally risky

Caviola, L., Coleman, M. B., Winter, C., & Lewis, J. (2024, June 14).

Abstract

Society relies on expert warnings about large-scale risks like pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate people’s reluctance to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk doesn’t occur, they will be perceived as overly alarmist and responsible for wasting societal resources. This phenomenon appears in the context of natural, technological, and financial risks and in US and Chinese samples, local policymakers, AI researchers, and legal experts. The reluctance to warn is aggravated when the warner will be held epistemically responsible, such as when they are the only warner and when the risk is speculative, lacking objective evidence. A remedy is offering anonymous expert warning systems. Our studies emphasize the need for societal risk management policies to consider psychological biases and social incentives.


Here are some thoughts:

The research on the "crying wolf" phenomenon is crucial for clinical psychologists as it delves into the psychological and social dynamics of risk communication and trust. Clinical psychologists often work with individuals and communities to manage anxiety and stress related to perceived threats.

Understanding how repeated false alarms can lead to desensitization and reduced trust in warnings helps psychologists develop better strategies for communicating risks without causing undue alarm or complacency. This knowledge is particularly relevant in therapeutic settings where clients may struggle with anxiety disorders exacerbated by frequent, yet unfounded, warnings about societal risks.

Moreover, the study highlights the reputational risks faced by those who issue warnings, which can be a significant concern for mental health professionals who must balance the need to alert clients to potential dangers with the risk of being perceived as alarmist. This balance is critical in maintaining the therapeutic alliance and ensuring that clients continue to trust and follow professional advice. By understanding the dynamics of the "crying wolf" effect, clinical psychologists can better navigate these challenges, ensuring that their warnings are taken seriously without causing unnecessary panic or skepticism.

Finally, the research underscores the importance of effective communication strategies in mitigating the negative impacts of the "crying wolf" effect. Clinical psychologists can apply these insights to improve their own communication with clients, particularly in crisis situations. By adopting evidence-based approaches to risk communication, psychologists can help clients make informed decisions, reduce anxiety, and foster resilience. This is especially pertinent in the context of public health crises, natural disasters, and other scenarios where accurate and trusted communication can significantly impact mental health outcomes.

Saturday, July 20, 2024

The Supreme Court upholds the conviction of woman who challenged expert testimony in a drug case

Lindsay Whitehurst
apnews.com
Originally posted 20 June 24

The Supreme Court on Thursday upheld the conviction of a California woman who said she did not know about a stash of methamphetamine hidden inside her car.

In a ruling that crossed the court’s ideological lines, the 6-3 majority opinion dismissed arguments that an expert witness for the prosecution had gone too far in describing the woman’s mindset when he said that most larger scale drug couriers are aware of what they are transporting.

“An opinion about most couriers is not an opinion about all couriers,” said Justice Clarence Thomas, who wrote the decision. He was joined by fellow conservatives Chief Justice John Roberts, Justices Samuel Alito, Brett Kavanaugh and Amy Coney Barrett as well as liberal Justice Ketanji Brown Jackson.

In a sharp dissent, conservative Justice Neil Gorsuch wrote that the ruling gives the government a “powerful new tool in its pocket.”

“Prosecutors can now put an expert on the stand — someone who apparently has the convenient ability to read minds — and let him hold forth on what ‘most’ people like the defendant think when they commit a legally proscribed act. Then, the government need do no more than urge the jury to find that the defendant is like ‘most’ people and convict,” he wrote. Joining him were the court’s other liberal justices, Sonia Sotomayor and Elena Kagan.


Here are some thoughts:

The recent Supreme Court case involving a woman convicted of drug trafficking highlights a complex issue surrounding expert testimony, particularly for psychologists. In this case, the prosecution's expert offered an opinion on the general awareness of large-scale drug couriers, which the defense argued unfairly portrayed the defendant's mindset. While the Court allowed the testimony, the ruling leaves some psychologists concerned.

The potential for expert testimony to blur the lines between general patterns and specific defendant behavior is a worry. Psychologists strive to present nuanced assessments based on individual cases. This ruling might incentivize broader generalizations, which could risk prejudicing juries against defendants. It's crucial to find a balance between allowing experts to provide helpful insights and ensuring they don't overstep into determining a defendant's guilt.

Moving forward, psychologists offering expert testimony may need to tread carefully.  They should ensure their testimony focuses on established psychological principles and avoids commenting on a specific defendant's knowledge or intent. This case underscores the importance of clear guidelines for expert witnesses to uphold the integrity of the justice system.

Friday, July 19, 2024

Turn a Kind Eye – Offering Positive Reframing

Menzin, E. R. (2024).
JAMA Internal Medicine.
Advance online publication.
https://doi.org/10.1001/jamainternmed.2024.2379

Here is an excerpt:

I have seen many patients struggle with anxiety and obsessive-compulsive disorder. I firmly believe in my obligation to connect them with evidence-based therapy and offer pharmacologic treatment. The cognitive behavioral therapy technique of reframing, which is so useful for treating anxiety disorders, can serve as a useful lens for the disorder itself.1 I try to reframe by pointing out their ability to see patterns and the strengths intrinsic to this nonneurotypical brain. Perhaps with this mindset, they can look at their behaviors with grace.

Physicians are problem solvers by nature and training. When faced with symptoms, we tend to go directly for the cure. When there are no or only suboptimal solutions, we tend to offer sympathy instead of strategy. Rather than apologize, we can reframe: build the scaffolding to allow patients to change their thought patterns. As with all strategies, this is not universally applicable. You cannot positively reframe a life-threatening diagnosis; to do so insults and minimizes the patient's distress. There are times to sit with patients as their house crumbles, and there are times to help them reframe the chaos.

Recently, I saw this done in another unlikely corner. I took my 89-year-old father to discuss a shoulder replacement with the orthopedist. "Hang on," the surgeon exclaimed enthusiastically, holding his capable hands in the air. "Before we talk about the surgical options, you tore your rotator cuff skiing Killington at 86? That's amazing!" With that phrase, he reframed my father's injury from the frailty of old age to a badge of athletic honor (though he never saw my dad ski). It does not change my father's difficult decision to live with the tear or a grueling repair. Yet as he uses his right hand to lift his left arm, perhaps he will think of the 50 years of skiing or the feeling of fresh snow beneath his skis. Instead of feeling angry, I now watch him maneuver that arm and recall the family ski trips, the children and grandchildren he taught to ski. Sometimes, we all need kind eyes.


Here are some thoughts: 

The article explores the concept of well-being as a complex interplay between internal mental states and external socio-cultural factors. It proposes a holistic view, emphasizing the importance of both internal and external influences on happiness.

The article discusses strategies for positive reframing, which involves shifting negative interpretations of situations or experiences toward a more positive perspective. This reframing can be applied both to internal thoughts and emotions and to external circumstances.

"Turning a kind eye" is a metaphor for adopting a positive and understanding perspective towards oneself and one's environment, ultimately contributing to greater well-being.

Thursday, July 18, 2024

Far-right extremist groups show surging growth, new annual study shows

Will Carless
USAToday.com
Originally published 7 June 24

Far-right extremist groups are actively working to undermine U.S. democracy and are organizing in record numbers, according to an annual report from the Southern Poverty Law Center. Meanwhile, extremist groups have been targeting faith-based groups that assist migrants on the U.S.-Mexico border, and a New Jersey state trooper is fired for having a racist tattoo.

It’s the week in extremism.

Far-right extremists suffered a blow in the wake of the Jan. 6 insurrection. More than 1,000 people were charged and key leaders were imprisoned, some for decades. But a new annual report from the Southern Poverty Law Center suggests the far-right has regrouped and is taking aim at democratic institutions across the country. 


The Year in Hate and Extremism from the Southern Poverty Law Center.

Here are some thoughts:

A new study highlighting the surge in far-right extremism holds significant weight for psychologists working with marginalized groups. This growth presents a heightened risk of threats and violence for these communities. Psychologists can play a vital role by understanding the vulnerabilities extremists prey on, fostering resilience in marginalized groups, and promoting social cohesion to counter extremist narratives. By acknowledging this trend, psychologists can equip themselves to better support the mental health of these vulnerable populations.

Wednesday, July 17, 2024

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

By Sigal Samuel
vox.com
Originally posted 18 May 24

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity. 

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out. 

What’s going on here?

If you’ve been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity. 

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him.


Here are some thoughts:

The OpenAI team's reported issues expose critical ethical concerns in AI development. A potential misalignment of values emerges when profit or technological advancement overshadows safety and ethical considerations. Businesses must strive for transparency, prioritizing human well-being and responsible innovation throughout the development process.

Prioritizing AI Safety

The departure of the safety team underscores the need for robust safeguards. Businesses developing AI should dedicate resources to mitigating risks like bias and misuse. Strong ethical frameworks and oversight committees can ensure responsible development.

Employee Concerns and Trust

The article hints at a lack of trust within OpenAI. Businesses must foster open communication by addressing employee concerns about project goals, risks, and ethics. Respecting employee rights to raise ethical concerns is crucial for maintaining trust and responsible AI development.

By prioritizing ethical considerations, aligning values, and fostering transparency, businesses can navigate the complexities of AI development and ensure their creations benefit humanity.

Tuesday, July 16, 2024

Robust and interpretable AI-guided marker for early dementia prediction in real-world clinical settings

Lee, L. Y., et al. (2024).
EClinicalMedicine, 102725.

Background

Predicting dementia early has major implications for clinical management and patient outcomes. Yet, we still lack sensitive tools for stratifying patients early, resulting in patients being undiagnosed or wrongly diagnosed. Despite rapid expansion in machine learning models for dementia prediction, limited model interpretability and generalizability impede translation to the clinic.

Methods

We build a robust and interpretable predictive prognostic model (PPM) and validate its clinical utility using real-world, routinely-collected, non-invasive, and low-cost (cognitive tests, structural MRI) patient data. To enhance scalability and generalizability to the clinic, we: 1) train the PPM with clinically-relevant predictors (cognitive tests, grey matter atrophy) that are common across research and clinical cohorts, 2) test PPM predictions with independent multicenter real-world data from memory clinics across countries (UK, Singapore).

Interpretation

Our results provide evidence for a robust and explainable clinical AI-guided marker for early dementia prediction that is validated against longitudinal, multicenter patient data across countries, and has strong potential for adoption in clinical practice.


Here is a summary and some thoughts:

Cambridge scientists have developed an AI tool capable of predicting with high accuracy whether individuals with early signs of dementia will remain stable or develop Alzheimer’s disease. This tool utilizes non-invasive, low-cost patient data such as cognitive tests and MRI scans to make its predictions, showing greater sensitivity than current diagnostic methods. The algorithm was able to correctly identify 82% of individuals who would develop Alzheimer’s and 81% of those who wouldn’t, surpassing standard clinical markers. This advancement could reduce the reliance on invasive and costly diagnostic tests and allow for early interventions, potentially improving treatment outcomes.

The machine learning model stratifies patients into three groups: those whose symptoms remain stable, those who progress slowly to Alzheimer’s, and those who progress rapidly. This stratification could help clinicians tailor treatments and closely monitor high-risk individuals. Validated with real-world data from memory clinics in the UK and Singapore, the tool demonstrates its applicability in clinical settings. The researchers aim to extend this model to other forms of dementia and incorporate additional data types, with the ultimate goal of providing precise diagnostic and treatment pathways, thereby accelerating the discovery of new treatments for dementia.