Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, June 18, 2023

Gender-Affirming Care for Trans Youth Is Neither New nor Experimental: A Timeline and Compilation of Studies

Julia Serano
Medium.com
Originally posted 16 May 23

Trans and gender-diverse people are a pancultural and transhistorical phenomenon. It is widely understood that we, like LGBTQ+ people more generally, arise from natural variation rather than from pathology, modernity, or the latest conspiracy theory.

Gender-affirming healthcare has a long history. The first trans-related surgeries were carried out in the 1910s–1930s (Meyerowitz, 2002, pp. 16–21). While some doctors were supportive early on, most were wary. Throughout the mid-twentieth century, these skeptical doctors subjected trans people to all sorts of alternate treatments — from perpetual psychoanalysis, to aversion and electroshock therapies, to administering assigned-sex-consistent hormones (e.g., testosterone for trans female/feminine people), and so on — but none of them worked. The only treatment that reliably allowed trans people to live happy and healthy lives was allowing them to transition. While doctors were initially worried that many would eventually come to regret that decision, study after study has shown that gender-affirming care has a far lower regret rate (typically around 1 or 2 percent) than virtually any other medical procedure. Given all this, plus the fact that there is no test for being trans (medical, psychological, or otherwise), around the turn of the century, doctors began moving away from strict gatekeeping and toward an informed consent model for trans adults to attain gender-affirming care.

Trans children have always existed — indeed most trans adults can tell you about their trans childhoods. During the twentieth century, while some trans kids did socially transition (Gill-Peterson, 2018), most had their gender identities disaffirmed, either by parents who disbelieved them or by doctors who subjected them to “gender reparative” or “conversion” therapies. The rationale behind the latter was a belief at that time that gender identity was flexible and subject to change during early childhood, but we now know that this is not true (see e.g., Diamond & Sigmundson, 1997; Reiner & Gearhart, 2004). Over the years, it became clear that these conversion efforts were not only ineffective, but they caused real harm — this is why most health professional organizations oppose them today.

Given the harm caused by gender-disaffirming approaches, around the turn of the century, doctors and gender clinics began moving toward what has come to be known as the gender affirmative model — here’s how I briefly described this approach in my 2016 essay Detransition, Desistance, and Disinformation: A Guide for Understanding Transgender Children Debates:

Rather than being shamed by their families and coerced into gender conformity, these children are given the space to explore their genders. If they consistently, persistently, and insistently identify as a gender other than the one they were assigned at birth, then their identity is respected, and they are given the opportunity to live as a member of that gender. If they remain happy in their identified gender, then they may later be placed on puberty blockers to stave off unwanted bodily changes until they are old enough (often at age sixteen) to make an informed decision about whether or not to hormonally transition. If they change their minds at any point along the way, then they are free to make the appropriate life changes and/or seek out other identities.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 May 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 
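
To make the segmentation idea concrete, here is a minimal, hypothetical sketch of how a collections system might bucket accounts by a crude risk score and pick a contact strategy per bucket. None of this comes from the McKinsey report or any vendor's product; the account fields, thresholds, and strategies are invented purely for illustration, and the sketch shows exactly where correlated inputs could quietly introduce bias.

```python
# Hypothetical illustration only -- not from the McKinsey report or any vendor's software.
# Buckets accounts by a crude risk score and maps each bucket to a contact strategy.

from dataclasses import dataclass

@dataclass
class Account:
    name: str
    credit_score: int      # e.g., 300-850
    days_past_due: int
    balance: float

def risk_segment(acct: Account) -> str:
    """Assign a coarse risk bucket; thresholds are invented for illustration."""
    if acct.credit_score >= 700 and acct.days_past_due < 30:
        return "low"
    if acct.credit_score >= 580 and acct.days_past_due < 90:
        return "medium"
    return "high"

# Each segment gets a different "contact technique" -- the kind of fine-tuning
# the article describes. If the inputs correlate with race, neighborhood, or
# income, the harsher treatment lands disproportionately on those groups.
CONTACT_STRATEGY = {
    "low": "gentle email reminder",
    "medium": "automated chatbot negotiation",
    "high": "frequent calls, escalate to human agent",
}

accounts = [
    Account("A", 720, 10, 450.0),
    Account("B", 600, 45, 1200.0),
    Account("C", 510, 120, 3100.0),
]

for acct in accounts:
    segment = risk_segment(acct)
    print(acct.name, segment, "->", CONTACT_STRATEGY[segment])
```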

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 May 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Thursday, June 15, 2023

Moralization and extremism robustly amplify myside sharing

Marie, A., Altay, S., et al.
PNAS Nexus, Volume 2, Issue 4, April 2023.

Abstract

We explored whether moralization and attitude extremity may amplify a preference to share politically congruent (“myside”) partisan news and what types of targeted interventions may reduce this tendency. Across 12 online experiments (N = 6,989), we examined decisions to share news touching on the divisive issues of gun control, abortion, gender and racial equality, and immigration. Myside sharing was systematically observed and was consistently amplified when participants (i) moralized and (ii) were attitudinally extreme on the issue. The amplification of myside sharing by moralization also frequently occurred above and beyond that of attitude extremity. These effects generalized to both true and fake partisan news. We then examined a number of interventions meant to curb myside sharing by manipulating (i) the audience to which people imagined sharing partisan news (political friends vs. foes), (ii) the anonymity of the account used (anonymous vs. personal), (iii) a message warning against the myside bias, and (iv) a message warning against the reputational costs of sharing “mysided” fake news coupled with an interactive rating task. While some of those manipulations slightly decreased sharing in general and/or the size of myside sharing, the amplification of myside sharing by moral attitudes was consistently robust to these interventions. Our findings regarding the robust exaggeration of selective communication by morality and extremism offer important insights into belief polarization and the spread of partisan and false information online.

General discussion

Across 12 experiments (N = 6,989), we explored US participants’ intentions to share true and fake partisan news on 5 controversial issues—gun control, abortion, racial equality, sex equality, and immigration—in social media contexts. Our experiments consistently show that people have a strong sharing preference for politically congruent news—Democrats even more so than Republicans. They also demonstrate that this “myside” sharing is magnified when respondents see the issue as being of “absolute moral importance”, and when they have an extreme attitude on the issue. Moreover, issue moralization was found to amplify myside sharing above and beyond attitude extremity in the majority of the studies. Expanding prior research on selective communication, our work provides a clear demonstration that citizens’ myside communicational preference is powerfully amplified by their moral and political ideology (18, 19, 39–43).

By examining this phenomenon across multiple experiments varying numerous parameters, we demonstrated the robustness of myside sharing and of its amplification by participants’ issue moralization and attitude extremity. First, those effects were consistently observed on both true (Experiments 1, 2, 3, 5a, 6a, 7, and 10) and fake (Experiments 4, 5b, 6b, 8, 9, and 10) news stories and across distinct operationalizations of our outcome variable. Moreover, myside sharing and its amplification by issue moralization and attitude extremity were systematically observed despite multiple manipulations of the sharing context. Namely, those effects were observed whether sharing was done from one's personal or an anonymous social media account (Experiments 5a and 5b), whether the audience was made of political friends or foes (Experiments 6a and 6b), and whether participants first saw intervention messages warning against the myside bias (Experiments 7 and 8), or an interactive intervention warning against the reputational costs of sharing mysided falsehoods (Experiments 9 and 10).

Wednesday, June 14, 2023

Can Robotic AI Systems Be Virtuous and Why Does This Matter?

Constantinescu, M., & Crisp, R. (2022).
International Journal of Social Robotics, 14, 1547–1557.

Abstract

The growing use of social robots in times of isolation refocuses ethical concerns for Human–Robot Interaction and its implications for social, emotional, and moral life. In this article we raise a virtue-ethics-based concern regarding deployment of social robots relying on deep learning AI and ask whether they may be endowed with ethical virtue, enabling us to speak of “virtuous robotic AI systems”. In answering this question, we argue that AI systems cannot genuinely be virtuous but can only behave in a virtuous way. To that end, we start from the philosophical understanding of the nature of virtue in the Aristotelian virtue ethics tradition, which we take to imply the ability to perform (1) the right actions (2) with the right feelings and (3) in the right way. We discuss each of the three requirements and conclude that AI is unable to satisfy any of them. Furthermore, we relate our claims to current research in machine ethics, technology ethics, and Human–Robot Interaction, discussing various implications, such as the possibility to develop Autonomous Artificial Moral Agents in a virtue ethics framework.

Conclusion

AI systems are neither moody nor dissatisfied, and they do not want revenge, which seems to be an important advantage over humans when it comes to making various decisions, including ethical ones. However, from a virtue ethics point of view, this advantage becomes a major drawback. For this also means that they cannot act out of a virtuous character, either. Despite their ability to mimic human virtuous actions and even to function behaviourally in ways equivalent to human beings, robotic AI systems cannot perform virtuous actions in accordance with virtues, that is, rightly or virtuously; nor for the right reasons and motivations; nor through phronesis take into account the right circumstances. And this has the consequence that AI cannot genuinely be virtuous, at least not with the current technological advances supporting their functional development. Nonetheless, it might well be that the more we come to know about AI, the less we know about its future. We therefore leave open the possibility of AI systems being virtuous in some distant future. This might, however, require some disruptive, non-linear evolution that includes, for instance, the possibility that robotic AI systems fully deliberate over their own versus others' goals and happiness and make their own choices and priorities accordingly. Indeed, to be a virtuous agent one needs to have the possibility to make mistakes, to reason over virtuous and vicious lines of action. But then this raises a different question: are we prepared to experience interaction with vicious robotic AI systems?

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120.

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.

Monday, June 12, 2023

Why some mental health professionals avoid self-care

Dattilio, F. M. (2023).
Journal of Consulting and Clinical Psychology, 
91(5), 251–253.
https://doi.org/10.1037/ccp0000818

Abstract

This article briefly discusses reasons why some mental health professionals are resistant to self-care. These reasons include the savior complex, avoidance, and lack of collegial assiduity. Several proposed solutions are offered.

Here is an excerpt:

Savior Complex

One hypothesis used to explain professionals’ resistance is what some refer to as a “savior complex.” Certain MHPs may be engaging in the cognitive distortion that it is their duty to save as many people from suffering and demise as they can and in turn need to sacrifice their own psychological welfare for those facing distress. MHPs may be skewed in their thinking that they are also invulnerable to psychological and other stressors. Inherent in this distortion is their fear of being viewed as weak or ineffective, and as a result, they overcompensate by attempting to be stronger than others. This type of thinking may also involve a defense mechanism that develops early in their professional lives and emerges during the course of their work in the field. This may stem from preexisting components of their personality dynamics. 

Another reason may be that the extreme rewards that professionals experience from helping others in such a desperate state of need serve as a euphoric experience for them that can be addictive. In essence, the “high” that they obtain from helping others often spurs them on.

Avoidance

Another less complicated explanation for MHPs’ blindness to their own vulnerabilities may be their strong desire to avoid admitting to their own weaknesses and sense of vulnerability. The defense mechanism of rationalization that they are stronger and healthier than everyone else may embolden them to push on even when there are visible signs to others of the stress in their lives that is compromising their functioning. 

Avoidance is also a way of sidestepping the obvious and putting it off until later. This may be compounded by increased need, particularly since the recent pandemic has intensified the demand for mental health services.

Denial

The dismissal of MHPs’ own needs, or what some may term “denial,” is a deeper aspect that goes hand in hand with the cognitive distortions that develop in MHPs, but it involves a more complex level of blindness to the obvious (Bearse et al., 2013). It may also serve as a way for professionals to devalue their own emotional and psychological challenges.

Denial may also stem from an underlying fear of being deemed by colleagues to be incapacitated or not up to the challenge, and thus of being prohibited from returning to work or having to face limitations or restrictions. It can sometimes emanate from the fear of being reported as having engaged in unethical behavior by not seeking assistance sooner. This is particularly so in cases of MHPs who have become involved with illicit drug or alcohol abuse or addiction.

Most ethical principles mandate that MHPs strive to remain cognizant of the potential effects that their work has on their own physical and mental health status while they are in the process of treating others and to recognize when their ability to be effective has been compromised. 

Last, in some cases, MHPs’ denial can even be a response to genuine and accurately perceived expectations in a variety of work contexts where they do not have control over their schedules. This may occur more commonly with facilities or institutions that do not support the disclosure of vulnerability and stress. It is for the aforementioned reasons that the American and Canadian Psychological Associations as well as other mental health organizations have mandated special education on this topic in graduate training programs (American Psychiatric Association, 2013; Maranzan et al., 2018).

Lack of Collegial Assiduity

A final reason may involve a lack of collegial assiduity, where fellow MHPs observe their colleagues enduring signs of stress but fail to confront the individual of concern and alert them to the obvious. It is often very awkward and uncomfortable for a colleague to address this issue and risk rebuke or a negative outcome. As a result, they simply avoid it altogether, thus leaving the issue of concern unaddressed.

The article is paywalled here, which is a complete shame.  We need more access to self-care resources.

Sunday, June 11, 2023

Podcast: Ethics Education and the Impact of AI on Mental Health

Hi All-

I recently had the privilege of being interviewed on the Psyched To Practice podcast. During this wide-ranging and unscripted interview, Ray Christner, Paul Wagner, and I engage in an insightful discussion about ethics, ethical decision-making, morality, and the potential impact of artificial intelligence on the practice of psychotherapy.

After sharing a limited biographical account of my personal journey towards becoming a clinical psychologist, we delve into various topics including ethical codes, decision science, and the significant role that morality plays in shaping the practice of clinical psychology.

While the interview has a duration of approximately one hour and 17 minutes, I recommend taking the time to listen to it, particularly if you are an early or mid-career mental health professional. The conversation offers valuable insights and perspectives that can greatly contribute to your professional growth and development.

Even though the podcast was not scripted, here is a reference list of the ideas I addressed during the interview, presented in alphabetical order rather than the order in which I discussed them.

References

Baxter, R. (2023, June 8). Lawyer’s AI Blunder Shows Perils of ChatGPT in ‘Early Days.’ Bloomberg Law News. Retrieved from https://news.bloomberglaw.com/business-and-practice/lawyers-ai-blunder-shows-perils-of-chatgpt-in-early-days


Chen, J., Zhang, Y., Wang, Y., Zhang, Z., Zhang, X., & Li, J. (2023). Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nature Biotechnology, 31(6), 631-636. doi:10.1038/s41587-023-00949-7


Dillon, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, in press.


Fowler, A. (2023, June 7). Artificial intelligence could help predict breast cancer risk. USA Today. Retrieved from https://www.usatoday.com/story/news/health/2023/06/07/artificial-intelligence-breast-cancer-risk-prediction/70297531007/


Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York, NY: Pantheon Books.


Handelsman, M. M., Gottlieb, M. C., & Knapp, S. (2005). Training ethical psychologists: An acculturation model. Professional Psychology: Research and Practice, 36(1), 59–65. https://doi.org/10.1037/0735-7028.36.1.59


Heinlein, R. A. (1961). Stranger in a strange land. New York, NY: Putnam.


Knapp, S. J., & VandeCreek, L. D. (2006). Practical ethics for psychologists: A positive approach. Washington, DC: American Psychological Association.


MacIver, M. B. (2022). Consciousness and inward electromagnetic field interactions. Frontiers in Human Neuroscience, 16, 1032339. https://doi.org/10.3389/fnhum.2022.1032339


Persson, G., Restori, K. H., Emdrup, J. H., Schussek, S., Klausen, M. S., Nicol, M. J., Katkere, B., Rønø, B., Kirimanjeswara, G., & Sørensen, A. B. (2023). DNA immunization with in silico predicted T-cell epitopes protects against lethal SARS-CoV-2 infection in K18-hACE2 mice. Frontiers in Immunology, 14, 1166546. doi:10.3389/fimmu.2023.1166546


Schwartz, S. H. (1992). Universalism-particularism: Values in the context of cultural evolution. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1-65). New York, NY: Academic Press.




Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B., et al. (2023).
Nature Machine Intelligence.

The recent releases of large-scale language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard, have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.
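
As a rough illustration of what it means to represent and predict statistical properties of language, here is a toy bigram sampler that generates fluent-looking text purely from counted word statistics. It is a sketch of the general idea only, not how ChatGPT, GPT-4, LLaMA, or Bard are actually built, but it makes the authors' point concrete: nothing in the objective checks whether the output is true.

```python
# Toy illustration: a bigram "language model" that predicts the next token
# purely from counted statistics of its training text. Real LLMs use deep
# neural networks, but the core objective -- model the statistics of language,
# not its truth -- is the same, which is why nothing here checks facts.

import random
from collections import defaultdict, Counter

corpus = "the model predicts the next word the model predicts text".split()

# Count how often each token follows each other token (wrapping around so
# every token has at least one possible successor).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    bigrams[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation: statistically plausible, fluent-looking,
# and entirely indifferent to whether it is true.
token = "the"
output = [token]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```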

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflect this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.

(cut)

Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption (ref. 25) has been that creatives and knowledge workers would remain much less impacted by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.