Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, December 27, 2022

Are Illiberal Acts Unethical? APA’s Ethics Code and the Protection of Free Speech

O'Donohue, W., & Fisher, J. E. (2022). 
American Psychologist, 77(8), 875–886.
https://doi.org/10.1037/amp0000995

Abstract

The American Psychological Association’s (APA’s) Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017b; hereinafter referred to as the Ethics Code) does not contain an enforceable standard regarding psychologists’ role in either honoring or protecting the free speech of others, or ensuring that their own free speech is protected, including an important corollary of free speech, the protection of academic freedom. Illiberal acts illegitimately restrict civil liberties. We argue that the ethics of illiberal acts have not been adequately scrutinized in the Ethics Code. Psychologists require free speech to properly enact their roles as scientists as well as professionals who wish to advocate for their clients and students to enhance social justice. This article delineates criteria for what ought to be included in the Ethics Code, argues that ethical issues regarding the protection of free speech rights meet these criteria, and proposes language to be added to the Ethics Code.

Impact Statement

Freedom of speech is a fundamental civil right, and it has recently come under threat. Psychologists can perform their duties as scientists, educators, or practitioners only if they are not censored and do not fear censorship. The American Psychological Association’s (APA’s) Ethics Code contains no enforceable ethical standard to protect freedom of speech for psychologists. This article examines the ethics of free speech, argues that its protection is an ethical matter, and proposes specific language for amending the APA Ethics Code to more clearly delineate psychologists’ rights and duties regarding free speech.

Conclusions

Free speech is central not only within the political sphere but also to the proper functioning of scholars and educators. Unfortunately, the ethics of free speech are not properly explicated in the current version of the American Psychological Association’s Ethics Code, and this is particularly concerning given data indicating a waning appreciation and protection of free speech in a variety of contexts. This article argues for robust protection of free speech rights through the inclusion in the Ethics Code of a clear and well-articulated statement of the psychologist’s duties related to free speech. Psychologists are committed to social justice, and there can be no social justice without free speech.

Monday, December 26, 2022

Is loneliness in emerging adults increasing over time? A preregistered cross-temporal meta-analysis and systematic review

Buecker, S., Mund, M., Chwastek, S., Sostmann, M.,
& Luhmann, M. (2021). 
Psychological Bulletin, 147(8), 787–805.

Abstract

Judged by the sheer amount of global media coverage, loneliness rates seem to be an increasingly urgent societal concern. From the late 1970s onward, the life experiences of emerging adults have been changing massively due to societal developments such as increased fragmentation of social relationships, greater mobility opportunities, and changes in communication due to technological innovations. These societal developments might have coincided with an increase in loneliness in emerging adults. In the present preregistered cross-temporal meta-analysis, we examined whether loneliness levels in emerging adults have changed over the last 43 years. Our analysis is based on 449 means from 345 studies with 437 independent samples and a total of 124,855 emerging adults who completed the University of California Los Angeles (UCLA) Loneliness Scale between 1976 and 2019. Averaged across all studies, loneliness levels increased linearly with calendar year (β = .224, 95% CI [.138, .309]). This increase corresponds to 0.56 standard deviations on the UCLA Loneliness Scale over the 43-year period studied. Overall, the results imply that loneliness may be a rising concern in emerging adulthood. Although the frequently used term “loneliness epidemic” seems exaggerated, emerging adults should not be overlooked when designing interventions against loneliness.
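
To make the analytic approach concrete, here is a minimal sketch of a cross-temporal meta-regression of the general kind used in such studies: study-level mean scores are regressed on the year of data collection, weighted by sample size. This is not the authors’ code, and every number below is hypothetical.

```python
# Minimal illustrative sketch of a cross-temporal meta-regression:
# regress study-level mean loneliness scores on year of data collection,
# weighting each study by its sample size. All data below are made up.
import numpy as np
import statsmodels.api as sm

years = np.array([1976, 1985, 1994, 2003, 2012, 2019])  # hypothetical study years
means = np.array([36.1, 36.8, 37.5, 38.2, 39.0, 39.6])  # hypothetical UCLA means
ns = np.array([120, 200, 150, 310, 260, 400])           # hypothetical sample sizes

X = sm.add_constant(years)
fit = sm.WLS(means, X, weights=ns).fit()
print(fit.params)  # intercept and slope: estimated change in mean score per year

# A standardized change, like the 0.56 SD reported above, can then be derived
# by multiplying the slope by the period length and dividing by the scale's SD.
```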

Impact Statement

Public Significance Statement—The present cross-temporal meta-analysis suggests that loneliness in emerging adults slightly increased over historical time from 1976 until 2019. Consequently, emerging adults should not be overlooked when designing future interventions or public health campaigns against loneliness.

From the Discussion Section

Contrary to the idea that loneliness has sharply increased since smartphones gained market saturation (in about 2012; Twenge et al., 2018), our data showed that loneliness in emerging adults has remained relatively stable since 2012 but gradually increased over the longer period (i.e., from 1976 until 2019). It therefore seems unlikely that increased smartphone use has led to increases in emerging adults’ loneliness. However, other societal developments since the late 1970s, such as greater mobility and fragmentation of social networks, may explain increases in emerging adults’ loneliness over historical time. Since our meta-analysis cannot provide information on other age groups such as children and adolescents, the role of smartphone use in loneliness could be different in those groups.

Sunday, December 25, 2022

Belief in karma is associated with perceived (but not actual) trustworthiness

Ong, H. H., Evans, A. M., et al. (2022).
Judgment and Decision Making, 17(2), 362–377.

Abstract

Believers in karma hold that ethical causation links good and bad outcomes to past moral and immoral acts. Karmic belief may have important interpersonal consequences. We investigated whether American Christians expect more trustworthiness from (and are more likely to trust) interaction partners who believe in karma. We conducted an incentivized trust game study in which interaction partners had different beliefs in karma and God. Participants expected more trustworthiness from (and were more likely to trust) karma believers. Expectations did not match actual behavior: karmic belief was not associated with actual trustworthiness. These findings suggest that people may use others’ karmic belief as a cue to predict their trustworthiness but would err when doing so.
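
For readers unfamiliar with the paradigm, here is a minimal sketch of a standard trust game’s payoff structure. The endowment and multiplier below are the conventional ones and are assumptions here; the study’s exact stakes are not given in this excerpt.

```python
# Illustrative payoffs of a standard trust game (assumed parameters:
# endowment of 10 units and a 3x multiplier).
def trust_game(sent: float, returned_fraction: float,
               endowment: float = 10.0, multiplier: float = 3.0):
    """Return (truster_payoff, trustee_payoff)."""
    assert 0 <= sent <= endowment and 0 <= returned_fraction <= 1
    pot = sent * multiplier             # the amount sent is multiplied
    returned = pot * returned_fraction  # the trustee chooses how much to return
    truster_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return truster_payoff, trustee_payoff

# Full trust with half returned beats keeping the endowment:
print(trust_game(sent=10, returned_fraction=0.5))  # (15.0, 15.0)
```

Sending more pays off only if the trustee returns enough, which is why expectations about a partner’s trustworthiness (here, cued by karmic belief) determine how much is sent.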

From the Discussion Section

We asked whether people perceive individuals who believe in karma, compared with those who do not, to be more trustworthy. In an incentivized study of American Christians, we found evidence that this was indeed the case. People expected interaction partners who believed in karma to behave in a more trustworthy manner and trusted these individuals more. Additionally, this tendency did not differ according to the perceiver’s own belief in karma.

While perceivers expected individuals who believed in karma to be more trustworthy, the individuals’ actual trustworthy behavior did not differ by their belief in karma. This discrepancy indicates that, although participants in our study used karmic belief as a cue when making trustworthiness judgments, the cue did not track actual trustworthiness. The absence of an association between karmic belief and actual trustworthy behavior among participants in the trustee role may seem to contradict prior research that found reminders of karma increased generous behavior in dictator games (White et al., 2019; Willard et al., 2020). However, note that our study did not involve any conspicuous reminders of karma: there was only a single question asking whether participants believed in karma. Thus, it may be that those who believe in karma behave in a more trustworthy manner only when the concept is made salient.

Although we found that karma believers were perceived as more trustworthy, the psychological explanation(s) for this finding remains an open question. One possible explanation is that karma is seen as a source of supernatural justice, and that individuals who believe in karma are expected to behave in a more trustworthy manner in order to avoid karmic punishment and/or to reap karmic rewards.


Saturday, December 24, 2022

How Stable are Moral Judgments?

Rehren, P., Sinnott-Armstrong, W.
Review of Philosophy and Psychology (2022).
https://doi.org/10.1007/s13164-022-00649-7

Abstract

Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. In this paper, we want to highlight one aspect that to date has been relatively neglected: the stability of moral judgment over time. After explaining why philosophers and psychologists should consider stability and then surveying previous research, we will present the results of an original three-wave longitudinal study. We asked participants to make judgments about the same acts in a series of sacrificial dilemmas three times, 6–8 days apart. In addition to investigating the stability of our participants’ ratings over time, we also explored some potential explanations for instability. To end, we will discuss these and other potential psychological sources of moral stability (or instability) and highlight possible philosophical implications of our findings.

From the General Discussion

We have argued that the stability of moral judgments over time is an important feature of moral cognition for philosophers and psychologists to consider. Next, we presented an original empirical study into the stability over 6–8 days of moral judgments about acts in sacrificial dilemmas. Like Helzer et al. (2017, Study 1), we found an overall test-retest correlation of 0.66. Moreover, we observed moderate to large proportions of rating shifts, and small to moderate proportions of rating revisions (M = 14%), rejections (M = 5%) and adoptions (M = 6%)—that is, the participants in question judged p in one wave, but did not judge p in the other wave.
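
As a rough illustration of how such quantities can be computed, here is a toy sketch of a test-retest correlation plus a simple classification of rating changes across two waves. The scale, midpoint, and coding scheme are assumptions, not the authors’ exact procedure.

```python
# Toy sketch: test-retest correlation and rating-change classification
# across two waves. The 1-7 scale and midpoint of 4 are assumptions.
import numpy as np

wave1 = np.array([6, 2, 5, 4, 7, 1, 3, 5])  # hypothetical ratings, wave 1
wave2 = np.array([6, 3, 2, 4, 7, 1, 5, 5])  # same participants, wave 2

r = np.corrcoef(wave1, wave2)[0, 1]          # test-retest correlation
print(f"test-retest r = {r:.2f}")

midpoint = 4
shifted = wave1 != wave2                             # any rating change at all
crossed = (wave1 > midpoint) != (wave2 > midpoint)   # judged p in only one wave
print(f"shifts: {shifted.mean():.0%}, side-crossings: {crossed.mean():.0%}")
```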

What Explains Instability?

One potential explanation of our results is that they are not a genuine feature of moral judgments about sacrificial dilemmas, but instead are due to measurement error. Measurement error is the difference between the observed and the true value of a variable. So, it may be that most of the rating changes we observed do not mean that many real-life moral judgments about acts in sacrificial dilemmas are (or would be) unstable over short periods of time. Instead, it may be that when people make moral judgments about sacrificial dilemmas in real life, their judgments remain very stable from one week to the next, but our study (perhaps any study) was not able to capture this stability.
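
A toy simulation makes the point concrete: even if every participant’s underlying judgment were perfectly stable, measurement error alone would pull the observed test-retest correlation well below 1. The noise level below is an assumption chosen purely for illustration.

```python
# Toy simulation (not the authors' analysis): perfectly stable "true"
# judgments plus independent measurement error yield an attenuated
# test-retest correlation.
import numpy as np

rng = np.random.default_rng(0)
true_judgment = rng.normal(0, 1, 10_000)  # identical across both waves
noise_sd = 0.7                            # assumed measurement-error SD

wave1 = true_judgment + rng.normal(0, noise_sd, 10_000)
wave2 = true_judgment + rng.normal(0, noise_sd, 10_000)

r = np.corrcoef(wave1, wave2)[0, 1]
print(f"observed r = {r:.2f}")  # approx. 1 / (1 + noise_sd**2) = 0.67
```

With this (assumed) noise level the expected observed correlation is about 0.67, close to the 0.66 reported above, which is why measurement error cannot be ruled out as an explanation for the observed instability.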

To the extent that real-life moral judgment is what moral psychologists and philosophers are interested in, this may suggest a problem with the type of study design used in this and many other papers. If there is enough measurement error, then it may be very difficult to draw firm conclusions about real-life moral judgments from this research. Other researchers have raised related objections. Most forcefully, Bauman et al. (2014) have argued that participants often do not take the judgment tasks used by moral psychologists seriously enough to engage with these tasks in anything like the way they would if they came across the same tasks in the real world (see also Ryazanov et al., 2018). In our view, moral psychologists would do well to more frequently move their studies outside of the (online) lab and into the real world (e.g., Bollich et al. 2016; Hofmann et al. 2014).

(cut)

Instead, our findings may tell us something about a genuine feature of real-life moral judgment. If so, then a natural question to ask is what makes moral judgments unstable (or stable) over time. In this paper, we have looked at three possible explanations, but we did not find evidence for them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may give rise to much more instability than moral judgments about other scenarios or statements. However, when we compared our test-retest correlations with a sampling of test-retest correlations from instruments involving other moral judgments, sacrificial dilemmas did not stand out. Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around. Third, Study 1b did not find evidence that rating changes, when they occurred, were often due to changes in light of reasons and reflection. Note that this does not mean that we can rule out any of these potential explanations for unstable moral judgments completely. As we point out below, our research is limited in the extent to which it could test each of these explanations, and so one or more of them may still have been the cause of some proportion of the rating changes we observed.

Friday, December 23, 2022

One thought too few: Why we punish negligence

Sarin, A., & Cushman, F. A. (2022, November 7).
https://doi.org/10.31234/osf.io/mj769

Abstract

Why do we punish negligence? Leading accounts explain away the punishment of negligence as a consequence of other, well-known phenomena: outcome bias, character inference, or the volitional choice not to exercise due care. Although they capture many important cases, these explanations fail to account for others. We argue that, in addition to these phenomena, there is something both fundamental and unique to the punishment of negligence itself: People hold others directly responsible for the basic fact of failing to bring to mind information that would help them to avoid important risks. In other words, we propose that, at its heart, negligence is a failure of thought. Drawing on the current literature in moral psychology, we suggest that people find it natural to punish such failures, even when they don’t arise from conscious, volitional choice. Then, drawing on the literature on how thoughts come to mind, we argue that punishing a person for forgetting will help them remember in the future. This provides new insight into the structure and function of our tendency to punish negligent actions.

Conclusion

Why do we punish negligence? Psychologists and philosophers have traditionally offered two answers: outcome bias (a punitive response elicited by the harm caused) and lack of due care (a punitive response elicited by the antecedent intentional choices that made negligence possible). These factors doubtlessly contribute in many cases, and they align well with psychological models that posit causation and intention as the primary determinants of punishment (Cushman, 2008; Laurent et al., 2016; Nobes et al., 2009; Shultz et al., 1986). Another potential explanation, rooted in character-based models of moral judgment (Gray et al., 2012; Malle, 2011; A. Smith, 2017; Sripada, 2016; Uhlmann et al., 2015), is that negligence speaks to an insufficient concern for others.

These models each attempt to “explain away” negligence as an outgrowth of other, better-understood parts of our moral psychology. We have argued, however, that there is something both fundamental and unique to negligence itself: that people simply hold others responsible for the basic fact of forgetting (or, more broadly, failing to call to mind) things that would have made them act better. In other words, at its heart, negligence is a failure of thought: a failure to make relevant dispositional knowledge occurrent at the right time.

Our challenge, then, is to explain the design principles behind this mechanism of moral judgment. If we hold people directly responsible for their failures of thought, what purpose does this serve? To address this question, we draw on the literature on how thoughts come to mind. It offers a model both of how negligence occurs and of why punishing such involuntary forgetting is adaptive. Value determines which actions, outcomes, and pieces of knowledge come to mind. Specifically, actions come to mind when they have high value, outcomes when they have high absolute value, and other sorts of knowledge structures when they contribute in valuable ways to the task at hand. After an action is chosen and executed, a person receives various kinds of positive and negative feedback: environmental, social, and internal. All kinds of feedback alter the value of actions, outcomes, and other knowledge structures. Value and feedback therefore form a self-reinforcing loop: value determines what comes to mind, and feedback (rewards and punishments) updates value.
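
The loop described above can be made concrete with a small sketch: items come to mind in proportion to their value, and punishing a failure of thought raises the forgotten item’s value. This is an illustrative toy model, not the authors’ formal proposal; all names and parameters are hypothetical.

```python
# Toy model of the value-feedback loop: retrieval probability tracks an
# item's value, and punished forgetting boosts that value, making the item
# more likely to come to mind next time. Parameters are illustrative.
import random

values = {"check_oven": 0.2, "lock_door": 0.8}  # hypothetical accessibility values
alpha = 0.5                                     # assumed learning rate

def comes_to_mind(item: str) -> bool:
    """Higher-value items are more likely to be retrieved."""
    return random.random() < values[item]

def punish(item: str, severity: float) -> None:
    """Feedback updates value: punishment for forgetting raises it."""
    values[item] = min(1.0, values[item] + alpha * severity)

if not comes_to_mind("check_oven"):   # negligence as a failure of thought
    punish("check_oven", severity=0.6)
print(values["check_oven"])           # the value rises, closing the loop
```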

Thursday, December 22, 2022

In the corner of an Australian lab, a brain in a dish is playing a video game - and it’s getting better

Liam Mannix
Sydney Morning Herald
Originally posted 13 NOV 22

Here is an excerpt:

Artificial intelligence controls an ever-increasing slice of our lives. Smart voice assistants hang on our every word. Our phones leverage machine learning to recognise our face. Our social media lives are controlled by algorithms that surface content to keep us hooked.

These advances are powered by a new generation of AIs built to resemble human brains. But none of these AIs are really intelligent, not in the human sense of the word. They can see the superficial pattern without understanding the underlying concept. Siri can read you the weather but she does not really understand that it’s raining. AIs are good at learning by rote, but struggle to extrapolate: even teenage humans need only a few sessions behind the wheel before they can drive, while Google’s self-driving car still isn’t ready after 32 billion kilometres of practice.

A true ‘general artificial intelligence’ remains out of reach - and, some scientists think, impossible.

Is this evidence human brains can do something special computers never will be able to? If so, the DishBrain opens a new path forward. “The only proof we have of a general intelligence system is done with biological neurons,” says Kagan. “Why would we try to mimic what we could harness?”

He imagines a future part-silicon-part-neuron supercomputer, able to combine the raw processing power of silicon with the built-in learning ability of the human brain.

Others are more sceptical. Human intelligence isn’t special, they argue. Thoughts are just electro-chemical reactions spreading across the brain. Ultimately, everything is physics - we just need to work out the maths.

“If I’m building a jet plane, I don’t need to mimic a bird. It’s really about getting to the mathematical foundations of what’s going on,” says Professor Simon Lucey, director of the Australian Institute for Machine Learning.

Why start the DishBrains on Pong? I ask. Because it’s a game with simple rules that make it ideal for training AI. And, grins Kagan, it was one of the first video games ever coded. A nod to the team’s geek passions - which run through the entire project.

“There’s a whole bunch of sci-fi history behind it. The Matrix is an inspiration,” says Chong. “Not that we’re trying to create a Matrix,” he adds quickly. “What are we but just a gooey soup of neurons in our heads, right?”

Maybe. But the Matrix wasn’t meant as inspiration: it’s a cautionary tale. The humans wired into it existed in a simulated reality while machines stole their bioelectricity. They were slaves.

Is it ethical to build a thinking computer and then restrict its reality to a task to be completed? Even if it is a fun task like Pong?

“The real life correlate of that is people have already created slaves that adore them: they are called dogs,” says Oxford University’s Julian Savulescu.

Thousands of years of selective breeding have turned a wild wolf into an animal that enjoys rounding up sheep, and that loves its human master unconditionally.

Wednesday, December 21, 2022

Do You Really Want to Read What Your Doctor Writes About You?

Zoya Qureshi
The Atlantic
Originally posted 15 NOV 22

You may not be aware of this, but you can read everything that your doctor writes about you. Go to your patient portal online, click around until you land on notes from your past visits, and read away. This is a recent development, and a big one. Previously, you always had the right to request your medical record from your care providers—an often expensive and sometimes fruitless process—but in April 2021, a new federal rule went into effect, mandating that patients have the legal right to freely and electronically access most kinds of notes written about them by their doctors.

If you’ve never heard of “open notes,” as this new law is informally called, you’re not the only one. Doctors say that the majority of their patients have no clue. (This certainly has been the case for all of the friends and family I’ve asked.) If you do know about the law, you likely know a lot about it. That’s typically because you’re a doctor—one who now has to navigate a new era of transparency in medicine—or you’re someone who knows a doctor, or you’re a patient who has become intricately familiar with this country’s health system for one reason or another.

When open notes went into effect, the change was lauded by advocates as part of a greater push toward patient autonomy and away from medical gatekeeping. Previously, hospitals could charge up to hundreds of dollars to release records, if they released them at all. Many doctors, meanwhile, have been far from thrilled about open notes. They’ve argued that this rule will introduce more challenges than benefits for both patients and themselves. At worst, some have fretted, the law will damage people’s trust of doctors and make everyone’s lives worse.

A year and a half in, however, open notes don’t seem to have done too much of anything. So far, they have neither revolutionized patient care nor sunk America’s medical establishment. Instead, doctors say, open notes have barely shifted the clinical experience at all. Few individual practitioners have been advertising the change, and few patients are seeking it out on their own. We’ve been left with a partially implemented system and a big unresolved question: How much, really, should you want to read what your doctor is writing about you?

(cut)

Open notes are only part of this conversation. The new law also requires that test results be made immediately available to patients, meaning that patients might see their health information before their physician does. Although this is fine for the majority of tests, problems arise when results are harbingers of more complex, or just bad, news. Doctors I spoke with shared that some of their patients have suffered trauma from learning about their melanoma or pancreatic cancer or their child’s leukemia from an electronic message in the middle of the night, with no doctor on hand to talk through the seriousness of that result. This was the case for Tara Daniels, a digital-marketing consultant who lives near Boston. She’s had leukemia three times, and learned about the third via a late-night notification from her patient portal. Daniels appreciates the convenience of open notes, which help her keep track of her interactions with various doctors. But, she told me, when it comes to instant results, “I still hold a lot of resentment over the fact that I found out from test results, that I had to figure it out myself, before my doctor was able to tell me.”

Tuesday, December 20, 2022

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A.
Cognition, Volume 231, February 2023, 105323

Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent and whose enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1,440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third-person judgments of the U.S. legal system. We found that these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3), and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third-party motives.
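
To illustrate the behavioral paradigm, here is a toy sketch of motivated enforcement of a phantom rule in a modified Dictator Game like the one described above. The endowment, selfishness threshold, and decision rule are assumptions for illustration, not the authors’ exact design.

```python
# Toy sketch of motivated phantom-rule enforcement (assumed parameters):
# the codified rule proscribes fractional offers, but observers punish
# the violation mainly when the dictator was also selfish.
def violates_phantom_rule(offer: float) -> bool:
    """The phantom rule forbids fractional (non-whole-number) offers."""
    return offer != int(offer)

def observer_punishes(offer: float, endowment: float = 10.0) -> bool:
    """Motivated enforcement: the technical violation is punished when it
    co-occurs with selfishness (assumed threshold: under 30% of the pot)."""
    return violates_phantom_rule(offer) and offer < 0.3 * endowment

print(observer_punishes(0.5))  # selfish AND fractional -> punished
print(observer_punishes(4.5))  # fractional but fair -> typically unpunished
```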

General Discussion

In this paper, we identified a subset of rules, which are explicitly codified (e.g., in professional tennis, in an economic game, by the U.S. legal system), frequently violated, and rarely enforced. As a result, their apparent punishability is particularly ambiguous and subject to motivation. These rules show us that codified rules, which are meant to apply equally to all, can be used to sanction behaviors outside of their jurisdiction. We named this subclass of rules phantom rules and found evidence that people enforce them according to their desire to punish a different behavior (i.e., a social norm violation), recognize them in the U.S. legal system, and employ motivated reasoning to determine their punishability. We hypothesized and found, across behavioral and survey experiments, that phantom rules—rules where the descriptive norms of enforcement are low—seem enforceable, punishable, and legitimate only when one has an external active motivation to punish. Indeed, we found that phantom rules were judged to be more justifiably enforced and more morally wrong to violate when the person who broke the rule had also violated a social norm—unless they were also punished for that social norm violation. Together, we take this as evidence of the existence of phantom rules and the malleability of their apparent punishability via active (vs. satiated) punishment motivation.

The ambiguity of phantom rule enforcement makes it possible for them to serve a hidden function: they can be used to punish behavior outside of the purview of the official rules. Phantom rule violations are technically wrong but, on average, are seen as less morally wrong. This means, for the most part, that people are unlikely to feel strongly when they see these rules violated, and indeed, people frequently violate phantom rules without consequence. This pattern fits well with previous work in experimental philosophy showing that motivations can affect how we reason about what constitutes breaking a rule in the first place. For example, when rule breaking occurs blamelessly (e.g., unintentionally), people are less likely to say a rule was violated at all and look for reasons to excuse the behavior (Turri, 2019; Turri & Blouw, 2015). Indeed, our findings mirror this pattern: people find a reason to punish phantom rule violations only when they are particularly or dispositionally motivated to punish.

Monday, December 19, 2022

Socially evaluative contexts facilitate mentalizing

Woo, B. M., Tan, E., Yuen, F. L., & Hamlin, J. K.
Trends in Cognitive Sciences, Month 2022, 
Vol. xx, No. xx

Abstract

Our ability to understand others’ minds stands at the foundation of human learning, communication, cooperation, and social life more broadly. Although humans’ ability to mentalize has been well-studied throughout the cognitive sciences, little attention has been paid to whether and how mentalizing differs across contexts. Classic developmental studies have examined mentalizing within minimally social contexts, in which a single agent seeks a neutral inanimate object. Such object-directed acts may be common, but they are typically consequential only to the object-seeking agent themselves. Here, we review a host of indirect evidence suggesting that contexts providing the opportunity to evaluate prospective social partners may facilitate mentalizing across development. Our article calls on cognitive scientists to study mentalizing in contexts where it counts.

Highlights

Cognitive scientists have long studied the origins of our ability to mentalize. Remarkably little is known, however, about whether there are particular contexts where humans are more likely to mentalize.
We propose that mentalizing is facilitated in contexts where others’ actions shed light on their status as a good or bad social partner. Mentalizing within socially evaluative contexts supports effective partner choice.

Our proposal is based on three lines of evidence. First, infants leverage their understanding of others’ mental states to evaluate others’ social actions. Second, infants, children, and adults demonstrate enhanced mentalizing within socially evaluative contexts. Third, infants, children, and adults are especially likely to mentalize when agents cause negative outcomes. Direct tests of this proposal will contribute to a more comprehensive understanding of human mentalizing.

Concluding remarks

Mental state reasoning is not only used for social evaluation, but may be facilitated, and even overactivated, when humans engage in social evaluation. Human infants begin mentalizing in socially evaluative contexts as soon as they do so in nonevaluative contexts, if not earlier, and mental state representations across human development may be stronger in socially evaluative contexts, particularly when there are negative outcomes. This opinion article supports the possibility that mentalizing is privileged within socially evaluative contexts, perhaps due to its key role in facilitating the selection of appropriate cooperative partners. Effective partner choice may provide a strong foundation upon which humans’ intensely interdependent and cooperative nature can flourish.

The work cited herein is highly suggestive, and more work is clearly needed to further explore this possibility (see Outstanding questions). We have mostly reviewed and compared data across experiments that have studied mentalizing in either socially evaluative or nonevaluative contexts, pulling from a wide range of ages and methods; to our knowledge, no research has directly compared socially evaluative and nonevaluative contexts within the same experiment. Experiments using stringent minimal-contrast designs would provide stronger tests of our central claims. In addition to such experiments, in the same way that meta-analyses have explored other predictors of mentalizing, we call on future researchers to conduct meta-analyses of findings that come from socially evaluative and nonevaluative contexts. We look forward to such research, which together will move us towards a more comprehensive understanding of humans’ early mentalizing.