Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, December 31, 2022

AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making

Cossette-Lefebvre, H., Maclure, J. 
AI Ethics (2022).
https://doi.org/10.1007/s43681-022-00233-w

Abstract

The use of predictive machine learning algorithms is increasingly common to guide or even take decisions in both public and private settings. Their use is touted by some as a potentially useful method to avoid discriminatory decisions since they are, allegedly, neutral, objective, and can be evaluated in ways no human decisions can. By (fully or partly) outsourcing a decision process to an algorithm, human organizations should be able to clearly define the parameters of the decision and, in principle, remove human biases. Yet, in practice, the use of algorithms can still be the source of wrongful discriminatory decisions based on at least three of their features: the data-mining process and the categorizations they rely on can reconduct human biases, their automaticity and predictive design can lead them to rely on wrongful generalizations, and their opaque nature is at odds with democratic requirements. We highlight that the latter two aspects of algorithms and their significance for discrimination are too often overlooked in contemporary literature. Though these problems are not all insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines to properly constrain the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and the decision reached using an algorithm should always be explainable and justifiable.
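
The first of these guidelines, vetting algorithms so that they do not unduly affect historically marginalized groups, is often operationalized in practice with simple group-fairness audits. The sketch below is not from the paper; it is a minimal illustration of one common check, the disparate-impact (demographic parity) ratio on a model's binary decisions. The variable names, the simulated data, and the 0.8 "four-fifths" threshold are all assumptions for illustration.

```python
import numpy as np

def disparate_impact_ratio(decisions, group, protected, reference):
    """Ratio of favorable-decision rates: protected group vs. reference group.

    decisions: array of 0/1 model decisions (1 = favorable outcome)
    group:     array of group labels, same length as decisions
    """
    rate_protected = decisions[group == protected].mean()
    rate_reference = decisions[group == reference].mean()
    return rate_protected / rate_reference

# Hypothetical audit of 10,000 automated decisions (e.g., loan approvals).
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000, p=[0.7, 0.3])
decisions = (rng.random(10_000) < np.where(group == "A", 0.45, 0.32)).astype(int)

ratio = disparate_impact_ratio(decisions, group, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")

# Assumed rule of thumb: flag the model for review if the protected group
# receives favorable decisions at less than four-fifths the reference rate.
if ratio < 0.8:
    print("Potential adverse impact: escalate to human review.")
```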

From the Conclusion

Given that ML algorithms are potentially harmful because they can compound and reproduce social inequalities, and that they rely on generalization disregarding individual autonomy, then their use should be strictly regulated. Yet, these potential problems do not necessarily entail that ML algorithms should never be used, at least from the perspective of anti-discrimination law. Rather, these points lead to the conclusion that their use should be carefully and strictly regulated. However, before identifying the principles which could guide regulation, it is important to highlight two things. First, the context and potential impact associated with the use of a particular algorithm should be considered. Anti-discrimination laws do not aim to protect from any instances of differential treatment or impact, but rather to protect and balance the rights of implicated parties when they conflict [18, 19]. Hence, using ML algorithms in situations where no rights are threatened would presumably be either acceptable or, at least, beyond the purview of anti-discriminatory regulations.

Second, one also needs to take into account how the algorithm is used and what place it occupies in the decision-making process. More precisely, it is clear from what was argued above that fully automated decisions, where an ML algorithm makes decisions with minimal or no human intervention in ethically high-stakes situations—i.e., where individual rights are potentially threatened—are presumably illegitimate because they fail to treat individuals as separate and unique moral agents. To avoid objectionable generalization and to respect our democratic obligations towards each other, a human agent should make the final decision—in a meaningful way which goes beyond rubber-stamping—or a human agent should at least be in a position to explain and justify the decision if a person affected by it asks for a revision. While this does not necessarily preclude the use of ML algorithms, it suggests that their use should be inscribed in a larger, human-centric, democratic process.

Friday, December 30, 2022

A new control problem? Humanoid robots, artificial intelligence, and the value of control

Nyholm, S. 
AI Ethics (2022).
https://doi.org/10.1007/s43681-022-00231-y

Abstract

The control problem related to robots and AI usually discussed is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g. control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

From the Concluding Discussion section

Self-control is often valued as good in itself or as an aspect of things that are good in themselves, such as virtue, personal autonomy, and human dignity. In contrast, control over other persons is often seen as wrong and bad in itself. This means, I have argued, that if control over AI can sometimes be seen or conceptualized as a form of self-control, then control over AI can sometimes be not only instrumentally good, but in certain respects also good as an end in itself. It can be a form of extended self-control, and therefore a form of virtue, personal autonomy, or even human dignity.

In contrast, if there will ever be any AI systems that could properly be regarded as moral persons, then it would be ethically problematic to wish to be in full control over them, since it is ethically problematic to want to be in complete control over a moral person. But even before that, it might still be morally problematic to want to be in complete control over certain AI systems; it might be problematic if they are designed to look and behave like human beings. There can be, I have suggested, something symbolically problematic about wanting to be in complete control over an entity that symbolizes or represents something—viz. a human being—that it would be morally wrong and in itself bad to try to completely control.

For these reasons, I suggest that it will usually be a better idea to try to develop AI systems that can sensibly be interpreted as extensions of our own agency while avoiding developing robots that can be, imitate, or represent moral persons. One might ask, though, whether the two possibilities can ever come together, so to speak.

Think, for example, of the robotic copy that the Japanese robotics researcher Hiroshi Ishiguro has created of himself. It is an interesting question whether the agency of this robot could be seen as an extension of Ishiguro’s agency. The robot certainly represents or symbolizes Ishiguro. So, if he has control over this robot, then perhaps this can be seen as a form of extended agency and extended self-control. While it might seem symbolically problematic if Ishiguro wants to have complete control over the robot Erica that he has created, which looks like a human woman, it might not be problematic in the same way if he wants to have complete control over the robotic replica that he has created of himself. At least it would be different in terms of what it can be taken to symbolize or represent.

Thursday, December 29, 2022

Parents’ Political Ideology Predicts How Their Children Punish

Leshin, R. A., Yudkin, D. A., Van Bavel, J. J., 
Kunkel, L., & Rhodes, M. (2022). 
Psychological Science
https://doi.org/10.1177/09567976221117154

Abstract

From an early age, children are willing to pay a personal cost to punish others for violations that do not affect them directly. Various motivations underlie such “costly punishment”: People may punish to enforce cooperative norms (amplifying punishment of in-groups) or to express anger at perpetrators (amplifying punishment of out-groups). Thus, group-related values and attitudes (e.g., how much one values fairness or feels out-group hostility) likely shape the development of group-related punishment. The present experiments (N = 269, ages 3−8 from across the United States) tested whether children’s punishment varies according to their parents’ political ideology—a possible proxy for the value systems transmitted to children intergenerationally. As hypothesized, parents’ self-reported political ideology predicted variation in the punishment behavior of their children. Specifically, parental conservatism was associated with children’s punishment of out-group members, and parental liberalism was associated with children’s punishment of in-group members. These findings demonstrate how differences in group-related ideologies shape punishment across generations.

Conclusion

The present findings suggest that political ideology shapes punishment across development. Counter to previous findings among adults (King & Maruna, 2009), parental conservatism (vs. liberalism) was not related to increased punishment overall. And counter to previous developmental research on belief transmission (Gelman et al., 2004), our patterns did not strengthen with age. Rather, we found that across development, the link between ideology and punishment hinged on group membership. Parental conservatism was associated with children’s punishment of out-groups, whereas parental liberalism was associated with children’s punishment of in-groups. Our findings add rich insights to our understanding of how costly punishment functions in group contexts and provide new evidence of the powerful transmission of belief systems across generations.

Wednesday, December 28, 2022

Physician-assisted suicide is not protected by Massachusetts Constitution, top state court rules

Chris Van Buskirk
masslive.com
Originally posted 6 Dec 22

The state’s highest court ruled Monday morning that the Massachusetts state constitution does not protect physician-assisted suicide and that laws around manslaughter may prohibit the practice.

The decision affects whether doctors can prescribe lethal amounts of medication to terminally ill patients that would end their life. The plaintiffs, a doctor looking to provide physician-assisted suicide and a patient with an incurable cancer, argued that patients with six months or less to live have a constitutional right to bring about their death on their own terms.

But defendants in the case have said that the decision to legalize or formalize the procedure here in Massachusetts is a question best left to state lawmakers, not the courts. And in an 89-page ruling, Associate Justice Frank Gaziano wrote that the Supreme Judicial Court agreed with that position.

The court, he wrote, recognized the “paramount importance and profound significance of all end-of-life decisions” but that the Massachusetts Declaration of Rights does not reach so far as to protect physician-assisted suicide.

“Our decision today does not diminish the critical nature of these interests, but rather recognizes the limits of our Constitution, and the proper role of the judiciary in a functioning democracy. The desirability and practicality of physician-assisted suicide raises not only weighty philosophical questions about the nature of life and death, but also difficult technical questions about the regulation of the medical field,” Gaziano wrote. “These questions are best left to the democratic process, where their resolution can be informed by robust public debate and thoughtful research by experts in the field.”

Plaintiff Roger Kligler, a retired physician, was diagnosed with stage four metastatic prostate cancer, and in May 2018, a doctor told him that there was a fifty percent chance that he would die within five years.

Kligler, Gaziano wrote in the ruling, had not yet received a six-month prognosis, and his cancer “currently has been contained, and his physician asserts that it would not be surprising if Kligler were alive ten years from now.”

Tuesday, December 27, 2022

Are Illiberal Acts Unethical? APA’s Ethics Code and the Protection of Free Speech

O'Donohue, W., & Fisher, J. E. (2022). 
American Psychologist, 77(8), 875–886.
https://doi.org/10.1037/amp0000995

Abstract

The American Psychological Association’s (APA’s) Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017b; hereinafter referred to as the Ethics Code) does not contain an enforceable standard regarding psychologists’ role in either honoring or protecting the free speech of others, or ensuring that their own free speech is protected, including an important corollary of free speech, the protection of academic freedom. Illiberal acts illegitimately restrict civil liberties. We argue that the ethics of illiberal acts have not been adequately scrutinized in the Ethics Code. Psychologists require free speech to properly enact their roles as scientists as well as professionals who wish to advocate for their clients and students to enhance social justice. This article delineates criteria for what ought to be included in the Ethics Code, argues that ethical issues regarding the protection of free speech rights meet these criteria, and proposes language to be added to the Ethics Code.

Impact Statement

Freedom of speech is a fundamental civil right that has recently come under threat. Psychologists can only perform their duties as scientists, educators, or practitioners if they are not censored and do not fear censorship. The American Psychological Association’s (APA’s) Ethics Code contains no enforceable ethical standard to protect freedom of speech for psychologists. This article examines the ethics of free speech and argues for amending the APA Ethics Code to more clearly delineate psychologists’ rights and duties regarding free speech. It argues that such protection is an ethical matter and proposes specific language to be included in the Ethics Code.

Conclusions

Free speech is central not only within the political sphere but also for the proper functioning of scholars and educators. Unfortunately, the ethics of free speech are not properly explicated in the current version of the American Psychological Association’s Ethics Code and this is particularly concerning given data that indicate a waning appreciation and protection of free speech in a variety of contexts. This article argues for fulsome protection of free speech rights by the inclusion of a clear and well-articulated statement in the Ethics Code of the psychologist’s duties related to free speech. Psychologists are committed to social justice and there can be no social justice without free speech.

Monday, December 26, 2022

Is loneliness in emerging adults increasing over time? A preregistered cross-temporal meta-analysis and systematic review

Buecker, S., Mund, M., Chwastek, S., Sostmann, M.,
& Luhmann, M. (2021). 
Psychological Bulletin, 147(8), 787–805.

Abstract

Judged by the sheer amount of global media coverage, loneliness rates seem to be an increasingly urgent societal concern. From the late 1970s onward, the life experiences of emerging adults have been changing massively due to societal developments such as increased fragmentation of social relationships, greater mobility opportunities, and changes in communication due to technological innovations. These societal developments might have coincided with an increase in loneliness in emerging adults. In the present preregistered cross-temporal meta-analysis, we examined whether loneliness levels in emerging adults have changed over the last 43 years. Our analysis is based on 449 means from 345 studies with 437 independent samples and a total of 124,855 emerging adults who completed the University of California Los Angeles (UCLA) Loneliness Scale between 1976 and 2019. Averaged across all studies, loneliness levels linearly increased with increasing calendar years (β = .224, 95% CI [.138, .309]). This increase corresponds to 0.56 standard deviations on the UCLA Loneliness Scale over the 43-year studied period. Overall, the results imply that loneliness can be a rising concern in emerging adulthood. Although the frequently used term “loneliness epidemic” seems exaggerated, emerging adults should nevertheless not be overlooked when designing interventions against loneliness.
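
For readers who want to see how a trend like this translates into a figure in standard-deviation units, the sketch below walks through the usual cross-temporal meta-analysis arithmetic: regress sample means on year (weighted by sample size) and divide the predicted raw-score change over the studied period by the average within-sample standard deviation. This is not the authors' code, and all input numbers are made up solely to show the calculation.

```python
import numpy as np

# Hypothetical inputs: one row per sample (the actual meta-analysis has 449 means).
years = np.array([1976, 1985, 1994, 2003, 2012, 2019], dtype=float)  # mean year of data collection
means = np.array([40.1, 40.6, 41.2, 41.9, 42.6, 43.0])               # sample means, UCLA Loneliness Scale
sds   = np.array([9.5, 9.8, 9.6, 10.0, 9.9, 10.1])                   # within-sample standard deviations
ns    = np.array([120, 210, 180, 260, 300, 240], dtype=float)        # sample sizes

# Regression of sample means on year, weighted by sample size
# (np.polyfit squares its weights, so sqrt(n) yields n-weighting).
slope, intercept = np.polyfit(years, means, deg=1, w=np.sqrt(ns))

# Express the total change over the studied period in SD units by dividing
# the predicted raw-score change by the average within-sample SD.
span = years.max() - years.min()   # 43 years in the actual study
change_in_sd = slope * span / sds.mean()
print(f"Predicted change over {span:.0f} years: {change_in_sd:.2f} SD")
```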

Impact Statement

Public Significance Statement—The present cross-temporal meta-analysis suggests that loneliness in emerging adults slightly increased over historical time from 1976 until 2019. Consequently, emerging adults should not be overlooked when designing future interventions or public health campaigns against loneliness.

From the Discussion Section

Contrary to the idea that loneliness has sharply increased since smartphones gained market saturation (in about 2012; Twenge et al., 2018), our data showed that loneliness in emerging adults has remained relatively stable since 2012 but gradually increased when looking at longer periods (i.e., from 1976 until 2019). It therefore seems unlikely that increased smartphone use has led to increases in emerging adults’ loneliness. However, other societal developments since the late 1970s, such as greater mobility and fragmentation of social networks, may explain increases in emerging adults’ loneliness over historical time. Since our meta-analysis cannot provide information on other age groups, such as children and adolescents, the role of smartphone use in loneliness could be different for those groups.

Sunday, December 25, 2022

Belief in karma is associated with perceived (but not actual) trustworthiness

Ong, H. H., Evans, A. M., et al. (2022).
Judgment and Decision Making, 17(2), 362–377.

Abstract

Believers of karma believe in ethical causation where good and bad outcomes can be traced to past moral and immoral acts. Karmic belief may have important interpersonal consequences. We investigated whether American Christians expect more trustworthiness from (and are more likely to trust) interaction partners who believe in karma. We conducted an incentivized study of the trust game where interaction partners had different beliefs in karma and God. Participants expected more trustworthiness from (and were more likely to trust) karma believers. Expectations did not match actual behavior: karmic belief was not associated with actual trustworthiness. These findings suggest that people may use others' karmic belief as a cue to predict their trustworthiness but would err when doing so.
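
For readers unfamiliar with the paradigm: in a standard trust game, a trustor decides how much of an endowment to send to a trustee; the sent amount is multiplied by the experimenter; and the trustee decides how much of the multiplied amount to return. The returned share is the usual behavioral measure of trustworthiness. The sketch below illustrates that generic payoff structure; it is not the authors' specific design, and the endowment, 3x multiplier, and example numbers are assumptions.

```python
def trust_game_payoffs(endowment, sent, multiplier, returned_share):
    """Payoffs in a standard trust game (generic illustration, not the
    specific design used in the Ong et al. study).

    endowment:      trustor's starting amount
    sent:           amount the trustor transfers (0 <= sent <= endowment)
    multiplier:     factor applied to the transfer (commonly 3)
    returned_share: fraction of the multiplied amount the trustee returns
    """
    assert 0 <= sent <= endowment and 0 <= returned_share <= 1
    pot = sent * multiplier
    returned = pot * returned_share
    trustor_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return trustor_payoff, trustee_payoff

# Example: full trust met with an even split of the multiplied amount.
print(trust_game_payoffs(endowment=1.0, sent=1.0, multiplier=3, returned_share=0.5))
# -> (1.5, 1.5): both parties end up better off than under no trust (1.0, 0.0).

# "Trusting" here corresponds to how much is sent; "trustworthiness" to the share returned.
```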

From the Discussion Section

We asked whether people perceive individuals who believe in karma, compared with those who do not, to be more trustworthy. In an incentivized study of American Christians, we found evidence that this was indeed the case. People expected interaction partners who believed in karma to behave in a more trustworthy manner and trusted these individuals more. Additionally, this tendency did not differ across the perceiver’s belief in karma.

While perceivers expected individuals who believed in karma to be more trustworthy, the individuals’ actual trustworthy behavior did not differ across their belief in karma. This discrepancy indicates that, although participants in our study used karmic belief as a cue when making trustworthiness judgments, the cue did not track actual trustworthiness. The absence of an association between karmic belief and actual trustworthy behavior among participants in the trustee role may seem to contradict prior research which found that reminders of karma increased generous behavior in dictator games (White et al., 2019; Willard et al., 2020). However, note that our study did not involve any conspicuous reminders of karma – there was only a single question asking if participants believe in karma. Thus, it may be that those who believe in karma would behave in a more trustworthy manner only when the concept is made salient.

Although we had found that karma believers were perceived as more trustworthy, the psychological explanation(s) for this finding remains an open question. One possible explanation is that karma is seen as a source of supernatural justice and that individuals who believe in karma are expected to behave in a more trustworthy manner in order to avoid karmic punishment and/or to reap karmic rewards.


Saturday, December 24, 2022

How Stable are Moral Judgments?

Rehren, P., Sinnott-Armstrong, W.
Review of Philosophy and Psychology (2022).
https://doi.org/10.1007/s13164-022-00649-7

Abstract

Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. In this paper, we want to highlight one aspect that to date has been relatively neglected: the stability of moral judgment over time. After explaining why philosophers and psychologists should consider stability and then surveying previous research, we will present the results of an original three-wave longitudinal study. We asked participants to make judgments about the same acts in a series of sacrificial dilemmas three times, 6–8 days apart. In addition to investigating the stability of our participants’ ratings over time, we also explored some potential explanations for instability. To end, we will discuss these and other potential psychological sources of moral stability (or instability) and highlight possible philosophical implications of our findings.

From the General Discussion

We have argued that the stability of moral judgments over time is an important feature of moral cognition for philosophers and psychologists to consider. Next, we presented an original empirical study into the stability over 6–8 days of moral judgments about acts in sacrificial dilemmas. Like Helzer et al. (2017, Study 1), we found an overall test-retest correlation of 0.66. Moreover, we observed moderate to large proportions of rating shifts, and small to moderate proportions of rating revisions (M = 14%), rejections (M = 5%) and adoptions (M = 6%)—that is, the participants in question judged p in one wave, but did not judge p in the other wave.
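
To make the reported quantities concrete, the sketch below computes a two-wave test-retest correlation and tallies how often numeric ratings shift and how often judgments are revised, rejected, or adopted, treating the scale midpoint as "no judgment". It is not the authors' analysis code; the 7-point scale, the simulated ratings, and the classification rules are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ratings from two waves on a 1-7 scale (4 = midpoint / no judgment).
wave1 = rng.integers(1, 8, size=200)
wave2 = np.clip(wave1 + rng.integers(-2, 3, size=200), 1, 7)

# Test-retest (Pearson) correlation between the two waves.
r = np.corrcoef(wave1, wave2)[0, 1]
print(f"Test-retest r = {r:.2f}")

def side(rating, midpoint=4):
    """Collapse a rating into a judgment: -1 (not-p), 0 (no judgment), or +1 (p)."""
    return int(np.sign(rating - midpoint))

shifts = revisions = rejections = adoptions = 0
for a, b in zip(wave1, wave2):
    if a != b:
        shifts += 1                       # any change in the numeric rating
    sa, sb = side(a), side(b)
    if sa != 0 and sb != 0 and sa != sb:
        revisions += 1                    # judged p in one wave, not-p in the other
    elif sa != 0 and sb == 0:
        rejections += 1                   # judged p, later made no judgment
    elif sa == 0 and sb != 0:
        adoptions += 1                    # made no judgment, later judged p

n = len(wave1)
print(f"shifts {shifts/n:.0%}, revisions {revisions/n:.0%}, "
      f"rejections {rejections/n:.0%}, adoptions {adoptions/n:.0%}")
```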

What Explains Instability?

One potential explanation of our results is that they are not a genuine feature of moral judgments about sacrificial dilemmas, but instead are due to measurement error. Measurement error is the difference between the observed and the true value of a variable. So, it may be that most of the rating changes we observed do not mean that many real-life moral judgments about acts in sacrificial dilemmas are (or would be) unstable over short periods of time. Instead, it may be that when people make moral judgments about sacrificial dilemmas in real life, their judgments remain very stable from one week to the next, but our study (perhaps any study) was not able to capture this stability.

To the extent that real-life moral judgment is what moral psychologists and philosophers are interested in, this may suggest a problem with the type of study design used in this and many other papers. If there is enough measurement error, then it may be very difficult to draw firm conclusions about real-life moral judgments from this research. Other researchers have raised related objections. Most forcefully, Bauman et al. (2014) have argued that participants often do not take the judgment tasks used by moral psychologists seriously enough for them to engage with these tasks in anything like the way they would if they came across the same tasks in the real world (also, see, Ryazanov et al. 2018). In our view, moral psychologists would do well to more frequently move their studies outside of the (online) lab and into the real world (e.g., Bollich et al. 2016; Hofmann et al. 2014).

(cut)

Instead, our findings may tell us something about a genuine feature of real-life moral judgment. If so, then a natural question to ask is what makes moral judgments unstable (or stable) over time. In this paper, we have looked at three possible explanations, but we did not find evidence for them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may give rise to much more instability than moral judgments about other scenarios or statements. However, when we compared our test-retest correlations with a sampling of test-retest correlations from instruments involving other moral judgments, sacrificial dilemmas did not stand out. Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around. Third, Study 1b did not find evidence that rating changes, when they occurred, were often due to changes in light of reasons and reflection. Note that this does not mean that we can rule out any of these potential explanations for unstable moral judgments completely. As we point out below, our research is limited in the extent to which it could test each of these explanations, and so one or more of them may still have been the cause for some proportion of the rating changes we observed.

Friday, December 23, 2022

One thought too few: Why we punish negligence

Sarin, A., & Cushman, F. A. (2022, November 7).
https://doi.org/10.31234/osf.io/mj769

Abstract

Why do we punish negligence? Leading accounts explain away the punishment of negligence as a consequence of other, well-known phenomena: outcome bias, character inference, or the volitional choice not to exercise due care. Although they capture many important cases, these explanations fail to account for others. We argue that, in addition to these phenomena, there is something both fundamental and unique to the punishment of negligence itself: People hold others directly responsible for the basic fact of failing to bring to mind information that would help them to avoid important risks. In other words, we propose that at its heart negligence is a failure of thought. Drawing on the current literature in moral psychology, we suggest that people find it natural to punish such failures, even when they don’t arise from conscious, volitional choice. Then, drawing on the literature on how thoughts come to mind, we argue that punishing a person for forgetting will help them remember in the future. This provides new insight on the structure and function of our tendency to punish negligent actions.

Conclusion

Why do we punish negligence? Psychologists and philosophers have traditionally offered two answers: Outcome bias (a punitive response elicited by the harm caused) and lack of due care (a punitive response elicited by the antecedent intentional choices that made negligence possible). These factors doubtlessly contribute in many cases, and they align well with psychological models that posit causation and intention as the primary determinants of punishment (Cushman, 2008; Laurent et al., 2016; Nobes et al., 2009; Shultz et al., 1986). Another potential explanation, rooted in character-based models of moral judgment (Gray et al., 2012; Malle, 2011; A. Smith, 2017; Sripada, 2016; Uhlmann et al., 2015), is that negligence speaks to an insufficient concern for others.

These models each attempt to “explain away” negligence as an outgrowth of other, better-understood parts of our moral psychology. We have argued, however, that there is something both fundamental and unique to negligence itself: that people simply hold others responsible for the basic fact of forgetting (or, more broadly, failing to call to mind) things that would have made them act better. In other words, at its heart, negligence is a failure of thought, a failure to make relevant dispositional knowledge occurrent at the right time.

Our challenge, then, is to explain the design principles behind this mechanism of moral judgment. If we hold people directly responsible for their failures of thought, what purpose does this serve? To address this question, we draw on the literature on how thoughts come to mind. It offers a model both of how negligence occurs and of why punishing such involuntary forgetting is adaptive. Value determines which actions, outcomes, and pieces of knowledge come to mind. Specifically, actions come to mind when they have high value, outcomes when they have high absolute value, and other sorts of knowledge structures when they contribute in valuable ways to the task at hand. After an action is chosen and executed, a person receives various kinds of positive and negative feedback: environmental, social, and internal. All kinds of feedback alter value, whether of actions, outcomes, or other knowledge structures. Value and feedback therefore form a self-reinforcing loop: value determines what comes to mind, and feedback (rewards and punishments) updates value.
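
Read computationally, the loop described here resembles a simple value-learning update in which punishment after a lapse raises the value, and hence the future accessibility, of the forgotten consideration. The sketch below is a schematic rendering of that idea under my own assumptions, not a model taken from the paper; the retrieval rule and all parameters are invented for illustration.

```python
import random

def run_agent(trials=200, learning_rate=0.2, punishment=1.0, seed=0):
    """Toy value-feedback loop: a consideration (e.g., "check the stove") comes
    to mind with probability proportional to its learned value; forgetting it
    leads to a bad outcome and punishment, which feeds back into the value and
    makes the thought more likely to come to mind next time. This is a
    schematic illustration, not a model from Sarin & Cushman.
    """
    rng = random.Random(seed)
    value = 0.1                      # initial value of bringing the thought to mind
    for _ in range(trials):
        comes_to_mind = rng.random() < min(value, 1.0)
        if comes_to_mind:
            feedback = 0.0           # risk avoided, nothing to punish
        else:
            feedback = punishment    # negligent lapse: punished despite no choice
        # Feedback raises the value (and future accessibility) of the neglected thought.
        value += learning_rate * feedback * (1.0 - min(value, 1.0))
    return value

print(f"Value after learning with punishment:    {run_agent(punishment=1.0):.2f}")
print(f"Value after learning without punishment: {run_agent(punishment=0.0):.2f}")
```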