Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 18, 2022

Tragic Choices and the Virtue of Techno-Responsibility Gaps

John Danaher
Philosophy and Technology 
35 (2):1-26 (2022)

Abstract

There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions and it is impossible to perfectly balance these considerations. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic; sometimes we delegate the tragic choice to others; sometimes we make the choice ourselves and bear the psychological consequences. Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced cost form of delegation. However, we only gain the advantage of this reduced cost if we accept that some techno-responsibility gaps are virtuous.

Conclusion

In summary, in this article I have defended an alternative perspective on techno-responsibility gaps. Although the prevailing wisdom seems to be against such gaps, and the policy proposals tend to try to find ways to plug or dissolve such gaps, I have argued that there may be reasons to welcome them. Tragic choices — moral conflicts that leave ineliminable moral remainders — are endemic in human life and there is no easy way to deal with them. We tend to cycle between different responses: illusionism, delegation and responsibilisation. Each of these responses has its own mix of benefits and costs. None of them is perfect. That said, one potential advantage of advanced autonomous machines is that they enable a form of delegation with reduced moral and psychological costs. Thus they can shift the balance of strategies in favour of delegation and away from responsibilisation. This is only true, however, if we embrace the resultant techno-responsibility gaps. I am fully aware that this position goes against the grain and is contrary to emerging law and policy on autonomous systems. I offer it as a moderate corrective to the current consensus. Responsibility gaps are not always a bad thing. Delegation to machines, particularly in the case of difficult tragic choices, might sometimes be a good thing.

Tuesday, May 17, 2022

Why it’s so damn hard to make AI fair and unbiased

Sigal Samuel
Vox.com
Originally posted 19 APR 2022

Here is an excerpt:

So what do big players in the tech space mean, really, when they say they care about making AI that’s fair and unbiased? Major organizations like Google, Microsoft, even the Department of Defense periodically release value statements signaling their commitment to these goals. But they tend to elide a fundamental reality: Even AI developers with the best intentions may face inherent trade-offs, where maximizing one type of fairness necessarily means sacrificing another.

The public can’t afford to ignore that conundrum. It’s a trap door beneath the technologies that are shaping our everyday lives, from lending algorithms to facial recognition. And there’s currently a policy vacuum when it comes to how companies should handle issues around fairness and bias.

“There are industries that are held accountable,” such as the pharmaceutical industry, said Timnit Gebru, a leading AI ethics researcher who was reportedly pushed out of Google in 2020 and who has since started a new institute for AI research. “Before you go to market, you have to prove to us that you don’t do X, Y, Z. There’s no such thing for these [tech] companies. So they can just put it out there.”

That makes it all the more important to understand — and potentially regulate — the algorithms that affect our lives. So let’s walk through three real-world examples to illustrate why fairness trade-offs arise, and then explore some possible solutions.

How would you decide who should get a loan?

Here’s another thought experiment. Let’s say you’re a bank officer, and part of your job is to give out loans. You use an algorithm to help you figure out whom you should loan money to, based on a predictive model — chiefly taking into account their FICO credit score — about how likely they are to repay. Most people with a FICO score above 600 get a loan; most of those below that score don’t.

One type of fairness, termed procedural fairness, would hold that an algorithm is fair if the procedure it uses to make decisions is fair. That means it would judge all applicants based on the same relevant facts, like their payment history; given the same set of facts, everyone will get the same treatment regardless of individual traits like race. By that measure, your algorithm is doing just fine.

But let’s say members of one racial group are statistically much more likely to have a FICO score above 600 and members of another are much less likely — a disparity that can have its roots in historical and policy inequities like redlining that your algorithm does nothing to take into account.

Another conception of fairness, known as distributive fairness, says that an algorithm is fair if it leads to fair outcomes. By this measure, your algorithm is failing, because its recommendations have a disparate impact on one racial group versus another.
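The lending example can be made concrete with a short sketch. The scores and groups below are invented for illustration; the point is only that an identical scoring rule for everyone (procedural fairness) can still yield sharply unequal approval rates across groups (distributive unfairness) when score distributions differ for historical reasons.

```python
THRESHOLD = 600  # same cutoff applied to every applicant

def approve(fico_score):
    """Procedurally fair rule: the decision depends only on the score."""
    return fico_score >= THRESHOLD

# Hypothetical applicant pools whose score distributions differ by group,
# e.g. because of historical inequities the scores themselves encode.
group_a = [720, 680, 650, 610, 590, 630, 700, 640]
group_b = [610, 570, 540, 590, 620, 560, 580, 550]

def approval_rate(scores):
    return sum(approve(s) for s in scores) / len(scores)

rate_a = approval_rate(group_a)  # 7 of 8 applicants approved
rate_b = approval_rate(group_b)  # 2 of 8 applicants approved
```

Every applicant is judged by the same rule, yet the approval rates diverge; a distributive-fairness audit would flag exactly the disparity a procedural-fairness audit misses.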

Monday, May 16, 2022

Exploring the Association between Character Strengths and Moral Functioning

Han, H., Dawson, K. J., et al. 
(2022, April 6). PsyArXiv
https://doi.org/10.1080/10508422.2022.2063867

Abstract

We explored the relationship between 24 character strengths measured by the Global Assessment of Character Strengths (GACS), which was revised from the original VIA instrument, and moral functioning comprising postconventional moral reasoning, empathic traits and moral identity. Bayesian Model Averaging (BMA) was employed to explore the best models, which were more parsimonious than full regression models estimated through frequentist regression, predicting moral functioning indicators with the 24 candidate character strength predictors. Our exploration was conducted with a dataset collected from 666 college students at a public university in the Southern United States. Results showed that character strengths as measured by GACS partially predicted relevant moral functioning indicators. Performance evaluation results demonstrated that the best models identified by BMA performed significantly better than the full models estimated by frequentist regression in terms of AIC, BIC, and cross-validation accuracy. We discuss theoretical and methodological implications of the findings for future studies addressing character strengths and moral functioning.

From the Discussion

Although the postconventional reasoning was relatively weakly associated with character strengths, several character strengths were still significantly associated with it. We were able to discover its association with several character strengths, particularly those within the domain of intellectual ability. One possible explanation is that intellectual strengths enable people to evaluate moral issues from diverse perspectives and appreciate moral values and principles beyond existing conventions and norms (Kohlberg, 1968). Having such intellectual strengths can thus allow them to engage in sophisticated moral reasoning. For instance, wisdom, judgment, and curiosity demonstrated positive correlation with postconventional reasoning as Han (2019) proposed. Another possible explanation is that the DIT focuses on hypothetical, abstract moral reasoning, instead of decision making in concrete situations (Rest et al., 1999b). Therefore, the emergence of positive association between intellectual strengths and postconventional moral reasoning in the current study is plausible.

The trend of positive relationships between character strengths and moral functioning indicators was also reported from best model exploration through BMA. First, postconventional reasoning was best predicted by intellectual strengths, curiosity, and wisdom, plus kindness. Second, EC was positively predicted by love, kindness, and gratitude. Third, PT was positively associated with wisdom and gratitude in the best model. Fourth, moral internalization was positively predicted by kindness and gratitude.

Sunday, May 15, 2022

A False Sense of Security: Rapid Improvement as a Red Flag for Death by Suicide

Rufino, K., Beyene, H., et al.
Journal of Consulting and Clinical Psychology. 
Advance online publication.

Objective: 
Postdischarge from inpatient psychiatry is the highest risk period for suicide; thus, better understanding the predictors of death by suicide during this time is critical for improving mortality rates after inpatient psychiatric treatment. As such, we sought to determine whether there were predictable patterns in suicide ideation in hospitalized psychiatric patients.

Method: 
We examined a sample of 2,970 adults ages 18–87 admitted to an extended length of stay (LOS) inpatient psychiatric hospital. We used group-based trajectory modeling via the SAS macro PROC TRAJ to quantitatively determine four suicide ideation groups: nonresponders (i.e., high suicide ideation throughout treatment), responders (i.e., steady improvement in suicide ideation across treatment), resolvers (i.e., rapid improvement in suicide ideation across treatment), and no-suicide ideation (i.e., never significant suicide ideation in treatment). Next, we compared groups to clinical and suicide-specific outcomes, including death by suicide.
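The four-group solution can be pictured, very loosely, in code. Group-based trajectory modeling (as in SAS PROC TRAJ) fits a finite mixture model to repeated measures; the nearest-prototype classifier below is only a hypothetical stand-in, with invented score ranges, to show what the four trajectory shapes describe.

```python
# Idealized weekly suicide-ideation trajectories (hypothetical 0-8 scale).
PROTOTYPES = {
    "nonresponder": [8, 8, 8, 8],  # high ideation throughout treatment
    "responder":    [8, 6, 4, 2],  # steady improvement across treatment
    "resolver":     [8, 2, 1, 0],  # rapid improvement across treatment
    "no_ideation":  [0, 0, 0, 0],  # never significant ideation
}

def classify(scores):
    """Assign a patient's score series to the nearest prototype
    by squared distance summed over time points."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(scores, proto))
    return min(PROTOTYPES, key=lambda name: dist(PROTOTYPES[name]))
```

The study's striking result is that the "resolver" shape, the one that looks most reassuring at discharge, carried the highest postdischarge suicide risk.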

Results: 
Resolvers were the most likely to die by suicide postdischarge relative to all other suicide ideation groups. Resolvers also demonstrated significant improvement in all clinical outcomes from admission to discharge. 

Conclusion: 
There are essential inpatient psychiatry clinical implications from this work, including that clinical providers should not be lulled into a false sense of security when hospitalized adults rapidly improve in terms of suicide ideation. Instead, inpatient psychiatric treatment teams should increase caution regarding the patient’s risk level and postdischarge treatment planning.

Impact Statement

As postdischarge from inpatient psychiatry is the highest risk period for suicide, better understanding the predictors of death by suicide during this time is critical for improving mortality rates after inpatient psychiatric treatment. Clinical providers should not be lulled into a false sense of security when hospitalized adults rapidly improve in terms of suicide ideation; instead, they should increase vigilance regarding the patient’s risk level and postdischarge treatment planning.

Saturday, May 14, 2022

Suicides of Psychologists and Other Health Professionals: National Violent Death Reporting System Data, 2003–2018

Li, T., Petrik, M. L., Freese, R. L., & Robiner, W. N.
(2022). American Psychologist. 
Advance online publication.

Abstract

Suicide is a prevalent problem among health professionals, with suicide rates often described as exceeding those of the general population. The literature addressing suicide of psychologists is limited, including its epidemiological estimates. This study explored suicide rates in psychologists by examining the National Violent Death Reporting System (NVDRS), the Centers for Disease Control and Prevention’s data set of U.S. violent deaths. Data were examined from participating states from 2003 to 2018. Trends in suicide deaths longitudinally were examined. Suicide decedents were characterized by examining demographics, region of residence, method of suicide, mental health, suicidal ideation, and suicidal behavior histories. Psychologists’ suicide rates are compared to those of other health professionals. Since its inception, the NVDRS identified 159 cases of psychologist suicide. Males comprised 64% of decedents. Average age was 56.3 years. Factors, circumstances, and trends related to psychologist suicides are presented. In 2018, psychologist suicide deaths were estimated to account for 4.9% of suicides among 10 selected health professions. As the NVDRS expands to include data from all 50 states, it will become increasingly valuable in delineating the epidemiology of suicide for psychologists and other health professionals and designing prevention strategies.

From the Discussion

Between 2003 and 2018, 159 cases of psychologist death by suicide were identified in the NVDRS, providing a basis for examining the phenomenon rather than clarifying its true incidence. Suicide deaths spanned all U.S. regions, with the South accounting for the most (35.8%) cases, followed by the West (24.5%), Midwest (20.1%), and Northeast (19.5%). It is unclear whether this is due to the South and West actually having higher suicide rates among psychologists or if these regions have greater representation due to inclusion of more reporting states. It should also be noted that these regions make up different proportions of the population for the entire United States. According to the U.S. Census Bureau (n.d.), the proportion of each region’s population as compared to the entire U.S. population for the year 2019 was South (38.3%), West (23.9%), Midwest (20.8%), and Northeast (17.1%). This could have affected the number of cases seen within each region, as could other factors, such as the trend for gun ownership to be more than twice as common in the South as in the Northeast (Pew Research Center, 2017). The 2003–2018 psychologist suicide deaths were more than 13 times higher than NVDRS-identified psychologist homicide deaths (n = 12) for that same period (Robiner & Li, 2022).

The number of psychologist suicides identified in the NVDRS generally increased longitudinally. It is not clear whether this might signal an actual increasing incidence, and if so what factors may be contributing, or how much it is an artifact of the increasing number of NVDRS-reporting states. Starting in 2020, the data will more clearly reveal temporal patterns, with variation reflecting changes in suicide incidence rather than how many states reported. In the future, we anticipate longitudinal trends will not be confounded by variation in the number of reporting states.

Most psychologist suicide decedents were White (92.5%). Smaller percentages were Black, Indigenous, and People of Color (BIPOC): Black (2.5%), Asian or Pacific Islander (1.9%), and two or more races (3.1%). These proportions align largely with the racial/ethnic makeup of the psychologist workforce in APA’s Survey of Psychology Health Service Providers for White (87.8%), Black (2.6%), Asian (2.5%), and multiracial/multiethnic psychologists (1.7%; Hamp et al., 2016). The data are generally consistent with earlier findings of psychologist suicide (Phillips, 1999) that most psychologist suicide decedents are White and reveal slightly greater diversification within the field. CDC data from 2019 reveal rates in the general population of suicide per 100,000 are greatest in Whites (29.8 male, 8.0 female), followed by Blacks (12.4 male, 2.9 female), Asians (11.2 male, 4.0 female), and Hispanics (11.3 male, 3.0 female; NIMH, 2021). There were no cases of Hispanic psychologist suicide in this sample, which is generally consistent with the relatively lower numbers of suicides reported for Hispanics by the CDC. The relatively small numbers of suicides within subgroups limit the certainty of inferences that can be drawn about the association of ethnicity, and potentially other demographics, and suicide incidence. As the demographic composition of the field diversifies, the durability of the present findings for subgroups remains to be seen.

Friday, May 13, 2022

How Other- and Self-Compassion Reduce Burnout through Resource Replenishment

Kira Schabram and Yu Tse Heng
Academy of Management Journal, Vol. 65, No. 2

Abstract

The average employee feels burnt out, a multidimensional state of depletion likely to persist without intervention. In this paper, we consider compassion as an agentic action by which employees may replenish their own depleted resources and thereby recover. We draw on conservation of resources theory to examine the resource-generating power of two distinct expressions of compassion (self- and other-directed) on three dimensions of burnout (exhaustion, cynicism, inefficacy). Utilizing two complementary designs—a longitudinal field survey of 130 social service providers and an experiential sampling methodology with 100 business students across 10 days—we find a complex pattern of results indicating that both compassion expressions have the potential to generate salutogenic resources (self-control, belonging, self-esteem) that replenish different dimensions of burnout. Specifically, self-compassion remedies exhaustion and other-compassion remedies cynicism—directly or indirectly through resources—while the effects of self- and other-compassion on inefficacy vary. Our key takeaway is that compassion can indeed contribute to human sustainability in organizations, but only when the type of compassion provided generates resources that fit the idiosyncratic experience of burnout.

From the Discussion Section

Our work suggests a more immediate benefit, namely that giving compassion can serve an important resource generative function for the self. Indeed, in neither of our studies did we find either compassion expression to ever have a deleterious effect. While this is in line with the broader literature on self-compassion (Neff, 2011), it is somewhat surprising when it comes to other-compassion. Hobfoll (1989) speculated that when people find themselves depleted, giving support to others should sap them further and such personal costs have been identified in previously cited research on prosocial gestures (Bolino & Grant, 2016; Lanaj et al., 2016; Uy et al., 2017). Why then did other-compassion serve a singularly restorative function? As we noted in our literature review, compassion is distinguished among the family of prosocial behaviors by its principal attendance to human needs (Tsui, 2013) rather than organizational effectiveness, and this may offer an explanation. Perhaps, there is something fundamentally more beneficial for actors about engaging in acts of kindness and care (e.g. taking someone who is having a hard time out for coffee) than in providing instrumental support (e.g. exerting oneself to provide a friendly review). We further note that our study also did not find any evidence of ‘compassion fatigue’ (Figley, 2013), identified frequently by practitioners among the social service employees that comprised our first sample. In line with the ‘desperation corollary’ of COR (Hobfoll et al., 2018), which suggests that individuals can reach a state of extreme depletion characterized by maladaptive coping, it may be that there exists a tipping point after which compassion ceases to offer benefits. If there is, however, it must be quite high to not have registered in either the longitudinal or diary designs. 

Thursday, May 12, 2022

Human Vision Reconstructs Time to Satisfy Causal Constraints

Bechlivanidis, C., Buehner, M. J., et al.
(2022).
Psychological Science, 33(2), 224–235.

Abstract

The goal of perception is to infer the most plausible source of sensory stimulation. Unisensory perception of temporal order, however, appears to require no inference, because the order of events can be uniquely determined from the order in which sensory signals arrive. Here, we demonstrate a novel perceptual illusion that casts doubt on this intuition: In three experiments (N = 607), the experienced event timings were determined by causality in real time. Adult participants viewed a simple three-item sequence, ACB, which is typically remembered as ABC in line with principles of causality. When asked to indicate the time at which events B and C occurred, participants’ points of subjective simultaneity shifted so that the assumed cause B appeared earlier and the assumed effect C later, despite participants’ full attention and repeated viewings. This first demonstration of causality reversing perceived temporal order cannot be explained by postperceptual distortion, lapsed attention, or saccades.

Statement of Relevance

There are two sources of information on the temporal order of events: the order in which we experience them and their causal relationships, because causes precede their effects. Intuitively, direct experience of order is far more dependable than causal inference. Here, we showed participants events that looked like collisions, but the collided-on object started moving before the collision occurred. Surprisingly, participants indicated in real time that they saw events happening significantly earlier or later than they actually did, at timings compatible with causal interpretations (as if there were indeed a collision). This is evidence that perceived order is not the passive registration of the sequence of signals arriving at the observer but an active interpretation informed by rich assumptions.

General Discussion

Collectively, our findings constitute the first demonstration of a unisensory perceptual illusion of temporal order induced by causal impressions, indicating that the visual system generates the experienced order through a process of interpretation (Grush, 2016; Holcombe, 2015). Participants were given precise instructions and sufficient time to repeatedly view the sequences, they attended to the critical events using the same modality, and they synchronized object motion with a nonlocalized flash. We can thus confidently rule out alternative explanations based on inattentional blindness, multimodal integration, flash lag, and motion aftereffects. Because stimulus presentation was free and unconstrained relative to the time of saccades, our results cannot be accounted for by transient perisaccadic mislocalization, either (Kresevic et al., 2016; Morrone et al., 2005). Although in this case we examined the effect only with an adult population recruited from a crowdsourcing platform, previous research suggests that children as young as 4 years old are also susceptible to causal reordering, at least when asked to make post hoc reports (Tecwyn et al., 2020). More research needs to be carried out to study the degree of perceptual shift and, more broadly, the generalizability of the current results.

Wednesday, May 11, 2022

Bias in mental health diagnosis gets in the way of treatment

Howard N. Garb
psyche.co
Originally posted 2 MAR 2022

Here is an excerpt:

What about race-related bias? 

Research conducted in the US indicates that race bias is a serious problem for the diagnosis of adult mental disorders – including for the diagnosis of PTSD, depression and schizophrenia. Preliminary data also suggest that eating disorders are underdiagnosed in Black teens compared with white and Hispanic teens.

The misdiagnosis of PTSD can have significant economic consequences, in addition to its implications for treatment. In order for a US military veteran to receive disability compensation for PTSD from the Veterans Benefits Administration, a clinician has to diagnose the veteran. To learn if race bias is present in this process, a research team compared its own systematic diagnoses of veterans with diagnoses made by clinicians during disability exams. Though most clinicians will make accurate diagnoses, the research diagnoses can be considered more accurate, as the mental health professionals who made them were trained to adhere to diagnostic criteria and use extensive information. When veterans received a research diagnosis of PTSD, they should have also gotten a clinician’s diagnosis of PTSD – but this occurred only about 70 per cent of the time.

More troubling is that, in cases where research diagnoses of PTSD were made, Black veterans were less likely than white veterans to receive a clinician’s diagnosis of PTSD during their disability exams. There was one set of cases where bias was not evident, however. In roughly 25 per cent of the evaluations, clinicians administered a formal PTSD symptom checklist or a psychological test to help them make a diagnosis – and if this additional information was collected, race bias was not observed. This is an important finding. Clinicians will sometimes form a first impression of a patient’s condition and then ask questions that can confirm – but not refute – their subjective impression. By obtaining good-quality objective information, clinicians might be less inclined to depend on their subjective impressions alone.

Race bias has also been found for other forms of mental illness. Historically, research indicated that Black patients and sometimes Hispanic patients were more likely than white patients to be given incorrect diagnoses of schizophrenia, while white patients were more often given correct diagnoses of major depression and bipolar disorder. During the past 20 years, this appears to have changed somewhat, with the most accurate diagnoses being made for Latino patients, the least accurate for Black patients, and the results for white patients somewhere in between.

Tuesday, May 10, 2022

Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness

Anthis, J.R. (2022). 
In: Klimov, V.V., Kelley, D.J. (eds) Biologically 
Inspired Cognitive Architectures 2021. BICA 2021. 
Studies in Computational Intelligence, vol 1032. 
Springer, Cham. 
https://doi.org/10.1007/978-3-030-96993-6_3

Abstract

Many philosophers and scientists claim that there is a ‘hard problem of consciousness’, that qualia, phenomenology, or subjective experience cannot be fully understood with reductive methods of neuroscience and psychology, and that there is a fact of the matter as to ‘what it is like’ to be conscious and which entities are conscious. Eliminativism and related views such as illusionism argue against this. They claim that consciousness does not exist in the ways implied by everyday or scholarly language. However, this debate has largely consisted of each side jousting analogies and intuitions against the other. Both sides remain unconvinced. To break through this impasse, I present consciousness semanticism, a novel eliminativist theory that sidesteps analogy and intuition. Instead, it is based on a direct, formal argument drawing from the tension between the vague semantics in definitions of consciousness such as ‘what it is like’ to be an entity and the precise meaning implied by questions such as, ‘Is this entity conscious?’ I argue that semanticism naturally extends to erode realist notions of other philosophical concepts, such as morality and free will. Formal argumentation from precise semantics exposes these as pseudo-problems and eliminates their apparent mysteriousness and intractability.

From Implications and Concluding Remarks

Perhaps even more importantly, humanity seems to be rapidly developing the capacity to create vastly more intelligent beings than currently exist. Scientists and engineers have already built artificial intelligences from chess bots to sex bots. Some projects are already aimed at the organic creation of intelligence, growing increasingly large sections of human brains in the laboratory. Such minds could have something we want to call consciousness, and they could exist in astronomically large numbers. Consider if creating a new conscious being becomes as easy as copying and pasting a computer program or building a new robot in a factory. How will we determine when these creations become conscious or sentient? When do they deserve legal protection or rights? These are important motivators for the study of consciousness, particularly for the attempt to escape the intellectual quagmire that may have grown from notions such as the ‘hard problem’ and ‘problem of other minds’. Andreotta (2020) argues that the project of ‘AI rights’, including artificial intelligences in the moral circle, is ‘beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the “Hard Problem” of consciousness’. While the extent of the impediment is unclear, a resolution of the ‘hard problem’ such as the one I have presented could make it easier to extend moral concern to artificial intelligences.