Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, October 14, 2024

This AI chatbot got conspiracy theorists to question their convictions

Helena Kudiabor
Nature.com
Originally posted 12 Sept 24

Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories: they designed a chatbot that can debunk false information and get people to question their thinking.

In a study published in Science on 12 September, participants spent a few minutes interacting with the chatbot, which provided detailed responses and arguments, and experienced a shift in thinking that lasted for months. This result suggests that facts and evidence really can change people’s minds.

“This paper really challenged a lot of existing literature about us living in a post-truth society,” says Katherine FitzGerald, who researches conspiracy theories and misinformation at Queensland University of Technology in Brisbane, Australia.

Previous analyses have suggested that people are attracted to conspiracy theories because of a desire for safety and certainty in a turbulent world. But “what we found in this paper goes against that traditional explanation”, says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. “One of the potentially cool applications of this research is you could use AI to debunk conspiracy theories in real life.”


Here are some thoughts:

Researchers have developed an AI chatbot capable of effectively debunking conspiracy theories and influencing believers to reconsider their views. The study challenges prevailing notions about the intractability of conspiracy beliefs and suggests that well-presented facts and evidence can indeed change minds.

The custom-designed chatbot, based on OpenAI's GPT-4 Turbo, was trained to argue convincingly against various conspiracy theories. In conversations averaging 8 minutes, the chatbot provided detailed, tailored responses to participants' beliefs. The results were remarkable: participants' confidence in their chosen conspiracy theory decreased by an average of 21%, with 25% moving from confidence to uncertainty. These effects persisted in follow-up surveys conducted two months later.
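
As a rough illustration of how such an intervention can be wired up, here is a minimal Python sketch, assuming the openai package (v1 or later) and an API key in the environment; the prompt wording, model settings, and function name are illustrative, not the authors' released code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debunking_turn(belief: str, history: list[dict]) -> str:
    """Generate one chatbot reply tailored to the participant's stated belief."""
    system_prompt = (
        "You are a factual, courteous assistant. The user believes this "
        f"conspiracy theory: {belief!r}. Respond to their specific points "
        "with accurate, well-sourced counter-evidence."
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the study used GPT-4 Turbo
        messages=[{"role": "system", "content": system_prompt}, *history],
    )
    return response.choices[0].message.content
```

In the study, each participant first described their theory in their own words, and the model's rebuttals were tailored to that description; passing the stated belief into the system prompt mimics that tailoring.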

This research has important implications for combating the spread of harmful conspiracy theories, which can have serious societal impacts. The study's success opens up potential applications for AI in real-world interventions against misinformation. However, the researchers acknowledge limitations, such as the use of paid survey respondents, and emphasize the need for further studies to refine the approach and ensure its effectiveness across different contexts and populations.

Sunday, October 13, 2024

Negative news headlines are more attractive: negativity bias in online news reading and sharing

Zhang, M., Wu, H., et al. (2024).
Current Psychology.

Abstract

Clickbait—online content designed to attract attention and clicks through misleading or exaggerated headlines—has become a prevalent phenomenon in online news. Previous research has sparked debate over the effectiveness of clickbait strategies and whether a bias toward negativity or positivity drives online news engagement. To clarify these issues, we conducted two studies. Study 1 examined participants’ preferences for news headlines, revealing a higher selection rate for negative headlines. This finding indicates a negativity bias in the news reading process and underscores the effectiveness of negative information in clickbait strategies. Study 2 simulated the process of news sharing and examined how participants generalize and report negative news. The findings show that participants amplified the negativity of the original news by using more negative terms or introducing new negative language, demonstrating an even stronger negativity bias during news sharing. These findings affirm the presence of a negativity bias in online engagement, in reading and sharing news. This study offers psychological insights into the clickbait phenomenon and provides theoretical support and practical implications for future research on negativity bias in online news.

The research is cited above.
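
To make the Study 1 comparison concrete, a selection-rate result of this kind can be illustrated with a simple binomial test; the counts below are invented for illustration and are not the paper's data:

```python
# Hypothetical check: are negative headlines chosen more often than chance
# when participants pick between paired headlines?
from scipy import stats

n_choices = 400    # invented number of headline choices
n_negative = 248   # invented count of negative headlines selected
result = stats.binomtest(n_negative, n_choices, p=0.5, alternative="greater")
print(f"selection rate = {n_negative / n_choices:.2f}, p = {result.pvalue:.4f}")
```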

Do moral values change with the seasons?

Hohm, I., O’Shea, B. A., & Schaller, M. (2024).
PNAS, 121(33).

Abstract

Moral values guide consequential attitudes and actions. Here, we report evidence of seasonal variation in Americans’ endorsement of some—but not all—moral values. Studies 1 and 2 examined a decade of data from the United States (total N = 232,975) and produced consistent evidence of a biannual seasonal cycle in values pertaining to loyalty, authority, and purity (“binding” moral values)—with strongest endorsement in spring and autumn and weakest endorsement in summer and winter—but not in values pertaining to care and fairness (“individualizing” moral values). Study 2 also provided some evidence that the summer decrease, but not the winter decrease, in binding moral value endorsement was stronger in regions with greater seasonal extremity. Analyses on an additional year of US data (study 3; n = 24,199) provided further replication and showed that this biannual seasonal cycle cannot be easily dismissed as a sampling artifact. Study 4 provided a partial explanation for the biannual seasonal cycle in Americans’ endorsement of binding moral values by showing that it was predicted by an analogous seasonal cycle in Americans’ experience of anxiety. Study 5 tested the generalizability of the primary findings and found similar seasonal cycles in endorsement of binding moral values in Canada and Australia (but not in the United Kingdom). Collectively, results from these five studies provide evidence that moral values change with the seasons, with intriguing implications for additional outcomes that can be affected by those values (e.g., intergroup prejudices, political attitudes, legal judgments).

Significance

We report evidence that people’s moral values change with the seasons. Analyses of a decade of data (232,975 questionnaire responses from 2011 to 2020) revealed a consistent seasonal cycle in Americans’ endorsement of moral values pertaining to loyalty, authority, and purity (with stronger endorsement in spring and autumn and weaker endorsement in summer and winter). This seasonal cycle was partially explained by an analogous seasonal cycle in Americans’ experience of anxiety. Similar seasonal cycles were observed in data from Canada and Australia (but not the United Kingdom). These findings have implications for attitudes and actions that can be affected by moral values, including intergroup prejudices, political ideologies, and legal judgments.

The article is linked above. It is paywalled.

Here are some thoughts: 

Recent research reveals that seasons may influence moral decision-making at a population level. The study found that binding values, which include loyalty, authority, and purity, exhibit biannual patterns, peaking in spring and fall while dipping in winter and summer. In contrast, individualizing values, such as care and fairness, remain relatively stable across seasons. This seasonal pattern was consistent over multiple years in the United States, Canada, and Australia, although it was not observed in the United Kingdom. Additionally, the researchers discovered that population-level anxiety patterns correlate with fluctuations in binding values, with anxiety peaking in spring and fall. These increases in anxiety may be linked to seasonal transitions in school and work, which can contribute to feelings of threat and a desire for group cohesion.
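
One way to see what a "biannual cycle" means statistically is to fit a cosine with a six-month period to monthly means, as in the sketch below. The numbers are invented for illustration; the paper's analyses are more involved:

```python
import numpy as np
from scipy.optimize import curve_fit

months = np.arange(1, 13)  # January .. December
# Invented monthly means of binding-value endorsement (peaks near April/October)
endorsement = np.array([3.1, 3.2, 3.4, 3.5, 3.3, 3.0,
                        2.9, 3.0, 3.3, 3.5, 3.4, 3.1])

def biannual_cycle(month, mean, amplitude, peak_month):
    # A six-month period produces two peaks per year (the "biannual" cycle)
    return mean + amplitude * np.cos(2 * np.pi * (month - peak_month) / 6)

params, _ = curve_fit(biannual_cycle, months, endorsement, p0=[3.2, 0.3, 4.0])
print(f"mean={params[0]:.2f}, amplitude={params[1]:.2f}, peak month={params[2]:.1f}")
```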

The implications of this research suggest that morality may be less rational and objective than commonly believed. While dramatic moral shifts do not occur at the individual level, collective small shifts in individual moral thinking could influence broader societal trends, such as discrimination, legal systems, and public compliance with government advice. Understanding these subtle influences can provide valuable insights into population-level behavioral trends and help us better anticipate seasonal shifts in social and political dynamics.

Saturday, October 12, 2024

Human embryo models are getting more realistic — raising ethical questions

Smriti Mallapaty
nature.com
Originally posted 11 Sept 24

Here is an excerpt:

Science accelerates

Meanwhile, the science keeps moving at such a pace that regulators have a lot to keep up with. In June 2024, the ISSCR announced that it had set up a working group to assess the state of the science and review earlier guidelines, in light of the models published since 2021.

In 2023, around half a dozen teams described models that recapitulate the development of embryos just after implantation. Two models in particular were widely covered by the media — one by Magdalena Zernicka-Goetz, a developmental biologist at the California Institute of Technology in Pasadena, and one by Jacob Hanna, a stem-cell biologist at the Weizmann Institute of Science in Rehovot, Israel. They were described as complete post-implantation models, but that title has been hotly debated.

“These are not complete models,” says Rivron. The one by Zernicka-Goetz’s group doesn’t have cells that behave like trophoblasts, which provide nutrition for the embryo — and although Hanna’s does contain a trophoblast-like layer, it isn’t as organized as the real thing, say researchers.

“It’s almost like a beauty contest — whose ‘model’ looked better,” says Jianping Fu, a bioengineer at the University of Michigan in Ann Arbor. “There’s a lot of excitement, but at the same time, there’s some hype in the field right now.”

Some researchers question the value of chasing a complete model. It’s a “pretty exquisite balancing act”, says Hyun. Researchers want models to resemble an embryo closely enough that they provide real insight into human development but not so closely that they can’t tell the difference between the two, and so risk restrictions to their work. “You want to skate as close to the edge as possible, without falling over,” he says.

Some researchers try to avoid this ethical dilemma by intentionally introducing changes to their embryo models that would make it impossible for the model to result in an organism. For example, Hanna has started working on models in which genes involved in brain and heart development have been inactivated. He has inferred from discussions with Christian and Jewish leaders in his community that an embryo model lacking brain or heart tissue would not be considered a form of person.

The info is here.

Here are some thoughts:

Scientists have made significant strides in developing sophisticated "embryo models" using stem cells that closely mimic aspects of early human development. These models present exciting opportunities to explore critical areas such as embryo development, infertility, and disease prevention, while also raising important ethical questions. Key advancements include the creation of "blastoids," which resemble early embryos at the blastocyst stage, as well as models that capture post-implantation development and gastrulation. Additionally, researchers have developed organ-specific models, such as those representing the neural tube and somites, which are crucial for understanding organogenesis.

The potential applications of these models are vast, including studying the causes of early pregnancy loss, improving the success rates of in vitro fertilization (IVF), testing drug safety, and producing blood stem cells for transplants. However, the rapid advancement of this field has led to ongoing challenges related to ethical and regulatory considerations, such as defining the distinction between embryos and embryo models, establishing limits on culturing duration, and imposing restrictions on implantation into animals or humans. As the field continues to progress, there is a pressing need for ongoing ethical guidance and oversight, as well as public engagement and transparency to address societal concerns.

Friday, October 11, 2024

Burnout, racial trauma, and protective experiences of Black psychologists and counselors

Brown, E. M., et al. (2024).
Psychological Trauma: Theory, Research,
Practice, and Policy. Advance online
publication. doi:10.1037/tra0001726

Abstract

Objective: The present study explored rates of burnout and racial trauma among 182 Black mental health professionals (BMHPs) and utilized racial-cultural theory to explore potential protective factors against burnout and racial trauma.

Method: We collected data from 182 Black psychologists and counselors who were active mental health professionals during 2020. Descriptive statistics, multivariate analyses of variance, follow-up univariate analyses of variance, bivariate correlations, and multiple regression analyses were used.

Results: Both burnout and racial trauma were considerably higher among BMHPs than has been reported across general samples of helping professionals and across a sample of Black participants across the United States. Differences among rates of burnout and racial trauma existed across genders and specialties (i.e., counseling and psychology). Higher levels of social support and an external locus of control significantly predicted lower levels of burnout and racial trauma. In addition, higher levels of resilient coping predicted lower levels of burnout. Last, more frequent meetings with a mentor significantly predicted lower levels of racial trauma.

Conclusions: Results from this study suggest that BMHPs may be more susceptible to burnout and race-based traumatic stress as a result of their work.

Clinical Impact Statement

The purpose of this study was to examine the rates of burnout and racial trauma of Black mental health professionals in the wake of COVID and the racial unrest of 2020. It was found that Black mental health professionals had significantly high rates of burnout and racial trauma. Previous studies have shown that high levels of burnout and race-based traumatic stress can be detrimental to one’s mental and physical health. Therefore, results show that greater attention needs to be given to the well-being of Black mental health professionals to support them in their work.

The article is paywalled.

Here are some thoughts:

Black mental health professionals (BMHPs) face significant challenges, including high rates of burnout and racial trauma, particularly in the wake of recent racial and political unrest. This study found that BMHPs, especially those with master's degrees, experience higher levels of burnout and racial trauma compared to other helping professionals. However, social support, mentoring, and a strong sense of calling to the Black community can serve as protective factors against these negative impacts. The study underscores the importance of providing greater support to BMHPs, particularly during times of heightened racial tension, to help them cope with the immense stress and trauma they encounter in their work.
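
For a concrete picture of the kind of analysis the Method section describes, here is a minimal sketch of a multiple regression predicting burnout from the protective factors; the file and variable names are hypothetical:

```python
# Hypothetical regression of burnout on the protective factors the study
# reports (social support, locus of control, resilient coping).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bmhp_survey.csv")  # invented filename
model = smf.ols(
    "burnout ~ social_support + locus_of_control + resilient_coping",
    data=df,
).fit()
print(model.summary())  # coefficient signs indicate direction of each effect
```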

Thursday, October 10, 2024

Moral Disagreement Across Politics Is Explained by Different Assumptions About Who Is Most Vulnerable to Harm

Womick, J., et al. (2024). 
PsyArXiv Preprints

Abstract

Liberals and conservatives disagree about morality, but explaining this disagreement does not require different moral foundations. All people share a common harm-based mind, making moral judgments based on what seems to cause harm—but people make different assumptions of who or what is especially vulnerable to harm. Liberals and conservatives emphasize different victims. Across eight studies, we validate a brief face-valid assessment of assumptions of vulnerability (AoVs) across methodologies and samples, linking AoVs to scenario judgments, implicit attitudes, and charity behaviors. AoVs, especially about the Environment, the Othered, the Powerful, the Divine, help explain political disagreement about hot-button issues surrounding abortion, immigration, sacrilege, gay rights, polluting, race, and policing. Liberals seem to amplify differences in vulnerability, splitting the world into the very vulnerable versus the very invulnerable, while conservatives dampen differences, seeing all people as similarly vulnerable to harm. AoVs reveal common cognition—and potential common ground—among moral disagreement.


Here are some thoughts: 

The study explores the origins of moral disagreement between liberals and conservatives. It argues that both groups share a common harm-based moral framework but differ in their assumptions about who or what is particularly vulnerable to harm. Liberals tend to amplify differences in vulnerability, emphasizing marginalized groups, whereas conservatives tend to dampen those differences, seeing people, including the powerful, as similarly vulnerable to harm. These differing perspectives shape their moral judgments and political disagreements on various issues. The study concludes that by understanding these differing assumptions of vulnerability, we can gain a better understanding of moral disagreement and potentially find common ground.

Wednesday, October 9, 2024

The rise of checkbox AI ethics: a review

Kijewski, S., Ronchi, E., & Vayena, E. (2024).
AI and Ethics.

Abstract

The rapid advancement of artificial intelligence (AI) sparked the development of principles and guidelines for ethical AI by a broad set of actors. Given the high-level nature of these principles, stakeholders seek practical guidance for their implementation in the development, deployment and use of AI, fueling the growth of practical approaches for ethical AI. This paper reviews, synthesizes and assesses current practical approaches for AI in health, examining their scope and potential to aid organizations in adopting ethical standards. We performed a scoping review of existing reviews in accordance with the PRISMA extension for scoping reviews (PRISMA-ScR), systematically searching databases and the web between February and May 2023. A total of 4284 documents were identified, of which 17 were included in the final analysis. Content analysis was performed on the final sample. We identified a highly heterogeneous ecosystem of approaches and a diverse use of terminology, a higher prevalence of approaches for certain stages of the AI lifecycle, reflecting the dominance of specific stakeholder groups in their development, and several barriers to the adoption of approaches. These findings underscore the necessity of a nuanced understanding of the implementation context for these approaches and that no one-size-fits-all approach exists for ethical AI. While common terminology is needed, this should not come at the cost of pluralism in available approaches. As governments signal interest in and develop practical approaches, significant effort remains to guarantee their validity, reliability, and efficacy as tools for governance across the AI lifecycle.


Here are some thoughts:

The scoping review reveals a complex and varied landscape of practical approaches to ethical AI, marked by inconsistent terminology and a lack of consensus on defining characteristics such as purpose and target audience. Currently, there is no unified understanding of terms like "tools," "toolkits," and "frameworks" related to ethical AI, which complicates their implementation in governance. A clear categorization of these approaches is essential for policymakers, as the diversity in terminology and ethical principles suggests that no single method can effectively promote AI ethics. Implementing these approaches necessitates a comprehensive understanding of the operational context of AI and the ethical concerns involved.

While there is a pressing need to standardize terminology, this should not come at the expense of diversity, as different contexts may require distinct approaches. The review indicates significant variation in how these approaches apply across the AI lifecycle, with many focusing on early stages like design and development, while guidance for later stages is notably lacking. This gap may be influenced by the private sector's dominant role in AI system design and the associated governance mechanisms, which often prioritize reputational risk management over comprehensive ethical oversight.

The review raises three critical questions: First, whether the rise of practical approaches to AI ethics represents a business opportunity, potentially leading to a proliferation of options but lacking rigorous evaluation. Second, it questions the robustness of these approaches for monitoring AI systems, highlighting a shortage of practical methods for auditing and impact assessment. Third, it suggests that effective AI governance may require context-specific approaches, advocating for standards like "ethical disclosure by default" to enhance transparency and accountability.

Significant barriers to the adoption of these approaches have been identified, including the high levels of expertise and resources required, a general lack of awareness, and the absence of effective measurement methods for successful implementation. The review emphasizes the need for practical validation metrics to assess compliance with ethical principles, as measuring the impact of AI ethics remains challenging.

Tuesday, October 8, 2024

Culture shapes moral reasoning about close others

Baldwin, C. R., et al. (2024).
Journal of Experimental Psychology:
General, 153(9), 2345–2358.

Abstract

Moral norms balance the needs of the group versus individuals, and societies across the globe vary in terms of the norms they prioritize. Extant research indicates that people from Western cultures consistently choose to protect (vs. punish) close others who commit crimes. Might this differ in cultural contexts that prioritize the self less? Prior research presents two compelling alternatives. On the one hand, collectivists may feel more intertwined with and tied to those close to them, thus protecting close others more. On the other hand, they may prioritize society over individuals and thus protect close others less. Four studies (N = 2,688) performed in the United States and Japan provide self-report, narrative, and experimental evidence supporting the latter hypothesis. These findings highlight how personal relationships and culture dynamically interact to shape how we think about important moral decisions.

Impact Statement

Public Significance Statement—Modern civilization is built on rules about how to behave. Yet, in Western cultures, when these rules are violated by people we know and love, people consistently dismiss them. Here, we demonstrate that this propensity to protect close others is powerfully influenced by culture. In four studies, we provide evidence (N = 2,688) that people from Japan—a culture in which individual interests are prioritized less than in the United States—are less likely to protect close others who transgress out of concern for the impact on society. We also demonstrate that this cultural difference disappears when people from Japan are themselves the victims, a scenario in which societal interests are muted and personal interests are focal. This work highlights how personal relationships and culture dynamically interact to shape how we think about important moral decisions.

The article is paywalled.

Here are some thoughts:

Cultural differences in moral decision-making regarding close others who commit crimes have been observed between Western and Eastern societies. Four studies conducted in the United States and Japan (N = 2,688) reveal that Japanese participants are less inclined to protect close others who transgress compared to Americans. This difference stems from Japanese prioritizing societal concerns over individual relationships. The research employed various methods, including self-report, narrative, and experimental designs, consistently demonstrating this cultural divergence.

Importantly, the influence of close relationships on moral reasoning was evident across all samples, but its strength was attenuated among Japanese participants and Americans primed with social norms emphasizing society over individuals. When the societal implications of a crime were minimized, the cultural difference disappeared, highlighting the mechanism driving this effect. These findings illustrate how culture modulates the impact of close relationships on moral reasoning through superordinate goals (e.g., protecting the self vs. society).

The results challenge the common assumption that collectivistic societies prioritize close social relationships more than individualistic ones. Instead, they suggest that Japanese interdependence is defined more in terms of societal obligations than specific relationships. This research contributes to a more nuanced understanding of how culture and personal relationships dynamically interact to shape important moral decisions, and it emphasizes the need for studying moral decision-making in diverse cultural contexts.

Monday, October 7, 2024

Prediction of Future Parkinson Disease Using Plasma Proteins Combined With Clinical-Demographic Measures

You, J., et al. (2024).
Neurology, 103(3).

Abstract

Background and Objectives

Identification of individuals at high risk of developing Parkinson disease (PD) several years before diagnosis is crucial for developing treatments to prevent or delay neurodegeneration. This study aimed to develop predictive models for PD risk that combine plasma proteins and easily accessible clinical-demographic variables.

Results

A total of 52,503 participants without PD (median age 58, 54% female) were included. Over a median follow-up duration of 14.0 years, 751 individuals were diagnosed with PD (median age 65, 37% female). Using a forward selection approach, we selected a panel of 22 plasma proteins for optimal prediction. Using an ensemble tree-based Light Gradient Boosting Machine (LightGBM) algorithm, the model achieved an area under the receiver operating characteristic curve (AUC) of 0.800 (95% CI 0.785–0.815). The LightGBM prediction model integrating both plasma proteins and clinical-demographic variables demonstrated enhanced predictive accuracy, with an AUC of 0.832 (95% CI 0.815–0.849). Key predictors identified included age, years of education, history of traumatic brain injury, and serum creatinine. The incorporation of 11 plasma proteins (neurofilament light, integrin subunit alpha V, hematopoietic PGD synthase, histamine N-methyltransferase, tubulin polymerization promoting protein family member 3, ectodysplasin A2 receptor, Latexin, interleukin-13 receptor subunit alpha-1, BAG family molecular chaperone regulator 3, tryptophanyl-tRNA synthetase, and secretogranin-2) augmented the model's predictive accuracy. External validation in the PPMI cohort confirmed the model's reliability, producing an AUC of 0.810 (95% CI 0.740–0.873). Notably, alterations in these predictors were detectable several years before the diagnosis of PD.

Discussion

Our findings support the potential utility of a machine learning-based model integrating clinical-demographic variables with plasma proteins to identify individuals at high risk for PD within the general population. Although these predictors have been validated by PPMI, additional validation in a more diverse population reflective of the general community is essential.

The article is cited above, but paywalled.

Here are some thoughts:

A recent study published in Neurology demonstrates the potential for early detection of Parkinson's disease (PD) using machine learning techniques. Researchers developed a predictive model that analyzes blood proteins in conjunction with clinical data, allowing for the identification of individuals at high risk of developing PD up to 15 years before symptoms appear. The study involved over 50,000 participants from the UK Biobank, focusing on 1,463 different blood plasma proteins. By employing machine learning to identify patterns in protein levels alongside clinical information—such as age, history of brain injuries, and blood creatinine levels—the researchers were able to achieve significant accuracy in predicting Parkinson's risk.
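
As an illustration of the modeling approach (not the authors' code), a prediction pipeline of this shape can be sketched in a few lines; the file and feature names are hypothetical stand-ins for the study's variables:

```python
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cohort.csv")  # hypothetical: protein levels + clinical data
features = ["nfl", "itgav", "age", "education_years",
            "tbi_history", "serum_creatinine"]  # illustrative subset
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["incident_pd"], test_size=0.2,
    stratify=df["incident_pd"], random_state=0)

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC = {auc:.3f}")
```

The paper reports an AUC of 0.832 for the combined protein plus clinical-demographic model; reproducing that figure would require the cohort's actual measurements and time-to-event-aware evaluation rather than this simple split.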

The findings revealed 22 specific proteins that are significantly associated with the risk of developing PD, including neurofilament light (NfL), which is linked to brain cell damage, as well as various proteins involved in inflammation and muscle function. This model not only offers a non-invasive and cost-effective screening method but also presents opportunities for early intervention and improved disease management, potentially enabling the development and assessment of neuroprotective treatments.

However, the study does have limitations that warrant consideration. The participant population was predominantly of European descent, which may limit the generalizability of the findings to more diverse groups. Additionally, the reliance on medical records for PD diagnosis raises concerns about potential misdiagnoses. Future research will need to validate the model in diverse populations and utilize more precise measurement techniques for protein levels. Longitudinal studies that incorporate repeated measurements could further enhance the predictive power of the model.

Overall, this groundbreaking research offers new hope for the early detection and intervention of Parkinson's disease, potentially revolutionizing the approach to managing this neurodegenerative disorder.

Sunday, October 6, 2024

ABA amicus brief asserts ban on gender-affirming care denies equal protection

American Bar Association
Press Release
Originally released 3 September 24

The American Bar Association filed an amicus brief today with the U.S. Supreme Court, arguing that a Tennessee law that prohibits gender-affirming medical care for minors violates the equal protection clause of the Fourteenth Amendment.

The ABA brief, filed in support of the federal government’s challenge to the law, contends that the Tennessee law, known as Senate Bill 1 (SB1), impermissibly denies the fundamental right to medical autonomy for certain groups while allowing it for others.

“Equal protection forbids differential treatment in the exercise of important constitutional rights absent the strongest justification, and Tennessee’s SB1 cannot withstand scrutiny under that standard,” the ABA brief says. It adds: “As the ABA has recognized in its past policy statements, state policy denying any individual access to needed medical care for reasons wholly unrelated to any medical justification — as SB1 does — is inimical to equality and equal dignity before the law.”

Gender-affirming care typically encompasses a range of social, psychological, behavioral and medical interventions that support and affirm an individual’s gender identity when it conflicts with the gender they were assigned at birth. In June, the U.S. Supreme Court agreed to consider the constitutionality of SB1 in its new term that begins Oct. 7. Media reports indicate that 26 Republican-controlled states in the past three years have enacted laws restricting such care for minors.

In its brief, the ABA outlines its lengthy record of supporting LGBTQ rights, first urging the repeal of laws criminalizing private sexual relations between consenting adults more than a half century ago. Most recently in August, the ABA House of Delegates adopted policy urging legal protection of access to gender-affirming care.

“The right of patients to access treatment without arbitrary governmental interference is grounded in the common-law right of bodily integrity and self-determination, as well as liberty interests protected by the Fourteenth Amendment,” the brief says, citing the U.S. Supreme Court’s prior decisions.

The ABA brief in U.S. v. Skrmetti, which asks that the Supreme Court reverse the appeals court decision upholding the law, is here. The law firm of Arnold & Porter Kaye Scholer LLP filed the brief pro bono on behalf of the ABA.

Saturday, October 5, 2024

Consensus, controversy, and chaos in the attribution of characteristics to the morally exceptional

Fleeson, W., Furr, R., et al. (2023).
Journal of Personality, 92(3), 715–734.

Abstract

Objective
What do people see as distinguishing the morally exceptional from others? To handle the problem that people may disagree about who qualifies as morally exceptional, we asked subjects to select and rate their own examples of morally exceptional, morally average, and immoral people.

Method
Subjects rated each selected exemplar on several enablers of moral action and several directions of moral action. By applying the logic underlying stimulus sampling in experimental design, we evaluated perceivers’ level of agreement about the characteristics of the morally exceptional, even though perceivers rated different targets.

Results
Across three studies, there was strong subjective consensus on who is morally exceptional: those who are empathetic and prone to guilt, those who reflect on moral issues and identify with morality, those who have self-control and actually enact moral behaviors, and those who care about harm, compassion, fairness, and honesty. Deep controversies also existed about the moral directions pursued by those seen as morally exceptional: People evaluated those who pursued similar values and made similar decisions more favorably.

Conclusion
Strong consensus suggests characteristics that may push a person to go beyond normal expectations, that the study of moral exceptionality is not overly hindered by disagreement over who is morally exceptional, and that there is some common ground between disagreeing camps.

The article is linked above.

Here are some thoughts:

The research explores the perception of morally exceptional individuals compared to those deemed typically moral or immoral, revealing a significant consensus on the characteristics that distinguish moral exceptionalism. Across three studies, participants identified that those considered morally exceptional possess traits such as empathy, self-control, and a strong moral identity, which enable them to act on their moral judgments. This consensus offers optimism for further research into moral exceptionalism, suggesting that despite widespread disagreement on specific moral issues, there is a shared understanding of the enablers of moral behavior.

The findings indicate that while there is agreement on the attributes of morally exceptional individuals, controversies arise regarding the moral judgments they make. Notably, individuals who share similar moral values with perceivers are more likely to be recognized as morally exceptional. This highlights a potential area for future research to explore these dynamics further.

The results also contribute to the understanding of moral pluralism, indicating that while there is consensus on the processes that characterize moral exceptionality, there remains considerable debate over the specific moral domains valued by these individuals. This suggests that moral exemplars may not exhibit unified virtue across all moral domains, prompting further investigation into how moral understanding can be enhanced through recognition of moral processes rather than solely focusing on moral content.

In summary, the research reveals a complex landscape of moral perception, where agreement exists on the enablers of moral action, yet significant differences persist in the moral judgments made by individuals. This duality presents opportunities for fostering dialogue and understanding across diverse moral perspectives.

Friday, October 4, 2024

New Ethics Opinion Addresses Obligations Associated With Collateral Information

Moran, M. (2024).
Psychiatric News, 59(09). 

What are a psychiatrist’s ethical obligations regarding confidentiality of sources of collateral information obtained in the involuntary hospitalization of a patient?

In a new opinion added to “The Principles of Medical Ethics With Annotations Especially Applicable to Psychiatry,” the APA Ethics Committee underscored that a psychiatrist’s overriding ethical obligation is to the safety of the patient, and that there can be no guarantee of confidentiality to family members or other sources who provide information that is used during involuntary hospitalization.

“Psychiatrists deal with collateral information in clinical practice routinely,” said Ethics Committee member Gregory Barber, M.D. “It’s a standard part of the job to collect collateral information in cases where a patient is hospitalized, especially involuntarily, and there can be a lot of complicated interpersonal dynamics that come up when family members provide that information.

“We obtain collateral information from people who know a patient well as a way to ensure we have a full clinical picture regarding the patient’s situation,” Barber said. “But our ethical obligations are to the patient and the patient’s safety. Psychiatrists do not have an established doctor-patient relationship with the source of collateral information and do not have obligations to keep the source hidden from patients. And we should not make guarantees that the information will remain confidential.”


Here are some thoughts:

Psychiatrists' ethical obligations regarding confidentiality of collateral information in involuntary hospitalization prioritize patient safety. While they should strive to protect sources' privacy, this may be secondary to ensuring the patient's well-being. Transparency and open communication with both the patient and the collateral source are essential for building trust and preventing conflicts.

Thursday, October 3, 2024

How a Leading Chain of Psychiatric Hospitals Traps Patients

Jessica Silver-Greenberg & Katie Thomas
The New York Times
Originally posted 1 September 24

Acadia Healthcare is one of America’s largest chains of psychiatric hospitals. Since the pandemic exacerbated a national mental health crisis, the company’s revenue has soared. Its stock price has more than doubled.

But a New York Times investigation found that some of that success was built on a disturbing practice: Acadia has lured patients into its facilities and held them against their will, even when detaining them was not medically necessary.

In at least 12 of the 19 states where Acadia operates psychiatric hospitals, dozens of patients, employees and police officers have alerted the authorities that the company was detaining people in ways that violated the law, according to records reviewed by The Times. In some cases, judges have intervened to force Acadia to release patients.

Some patients arrived at emergency rooms seeking routine mental health care, only to find themselves sent to Acadia facilities and locked in.

A social worker spent six days inside an Acadia hospital in Florida after she tried to get her bipolar medications adjusted. A woman who works at a children’s hospital was held for seven days after she showed up at an Acadia facility in Indiana looking for therapy. And after police officers raided an Acadia hospital in Georgia, 16 patients told investigators that they had been kept there “with no excuses or valid reason,” according to a police report.


Here are some thoughts:

Acadia Healthcare, a leading provider of psychiatric hospitals in the United States, has seen significant revenue growth amid the ongoing mental health crisis. However, an investigation by The New York Times revealed that the company has been accused of detaining patients against their will, even when it wasn't medically necessary.

The investigation uncovered numerous instances where patients, employees, and police officers reported Acadia's practices violating the law. Patients were often held for longer than necessary, sometimes for financial reasons rather than medical ones. Some patients were even lured into facilities under false pretenses.

Despite the allegations, Acadia has maintained that its practices are driven by medical necessity and patient safety. However, the company has faced multiple investigations and settlements related to its practices.

The investigation raises serious questions about the quality of care provided by for-profit psychiatric hospitals and the potential for abuse when financial incentives outweigh patient needs.

Wednesday, October 2, 2024

Remotely administered non-deceptive placebos reduce COVID-related stress, anxiety, and depression

Guevarra, D. A., Webster, C. T., et al. (2024).
Applied Psychology: Health and Well-Being.

Abstract

Research suggests that placebos administered without deception (i.e. non-deceptive placebos) may provide an effective and low-effort intervention to manage stress and improve mental health. However, whether non-deceptive placebos administered remotely online can manage distress for people at risk for developing high levels of affective symptoms remains unclear. Volunteers experiencing prolonged stress from the COVID-19 pandemic were recruited into a randomized controlled trial to examine the efficacy of a non-deceptive placebo intervention administered remotely online on affective outcomes. COVID-related stress, overall stress, anxiety, and depression were assessed at baseline, midpoint, and endpoint. Compared with the control group, participants in the non-deceptive placebo group reported significant reductions from baseline in all primary affective outcomes after 2 weeks. Additionally, participants in the non-deceptive placebo group found the intervention feasible, acceptable, and appropriate for the context. Non-deceptive placebos, even when administered remotely online, offer an alternative and effective way to help people manage prolonged stress. Future large-scale studies are needed to determine if non-deceptive placebos can be effective across different prolonged stress situations and for clinical populations.

The article is linked above.

Here are some thoughts:

Non-deceptive placebos (NDPs) have shown promise as an effective and low-effort intervention for managing stress and improving mental health, even when administered remotely online. A randomized controlled trial examined the efficacy of NDPs on volunteers experiencing prolonged stress from the COVID-19 pandemic. The study found that participants in the NDP group reported significant reductions in COVID-related stress, overall stress, anxiety, and depression compared to the control group after just two weeks. The effect sizes were comparable to those seen in self-guided online cognitive behavioral therapy programs.

Participants in the NDP group also reported high expectations of benefits and found the intervention feasible, acceptable, and appropriate for the context. These findings suggest that NDPs, even when administered remotely, can help moderately at-risk individuals manage their psychological health during prolonged stressful situations. The study's results are consistent with previous research showing positive effects of NDPs on affect-related outcomes.

However, the researchers noted some limitations, including a small and demographically limited sample size, potential response bias, and the need for more diverse and larger-scale studies. Despite these limitations, the study highlights the potential of NDPs as a scalable, easily implementable secondary intervention to help prevent medium-risk populations from progressing to clinical levels of affective symptoms. Future research should focus on examining the efficacy of NDPs across different stress situations and for clinical populations, as well as investigating the mechanisms through which NDPs exert their effects.
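
As a concrete, entirely hypothetical illustration of the group comparison such a trial reports, here is a minimal sketch comparing baseline-to-endpoint change between arms; the column names are invented:

```python
# Hypothetical analysis: change in anxiety from baseline (t0) to endpoint (t2)
# in the non-deceptive placebo arm versus control, with an effect size.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("trial.csv")  # invented file: group, anxiety_t0, anxiety_t2
df["change"] = df["anxiety_t2"] - df["anxiety_t0"]
placebo = df.loc[df["group"] == "ndp", "change"]
control = df.loc[df["group"] == "control", "change"]

t, p = stats.ttest_ind(placebo, control)
pooled_sd = np.sqrt((placebo.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (placebo.mean() - control.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d:.2f}")
```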

I often think about psychotherapy's potential placebo effects (and how to possibly incorporate this into my informed consent to treatment). Psychotherapy's relationship to placebo effects is a complex and debated topic in the field of mental health. While psychotherapy shares some key mechanisms with placebos, such as the importance of patient expectations, the therapeutic alliance, and nonspecific factors like empathy and attention, it would be overly reductive to label psychotherapy as merely a placebo. Psychotherapy has specific, theorized mechanisms of action and has been shown in well-designed studies to outperform placebo controls, albeit sometimes by small margins. However, the similarities between psychotherapy and placebo effects highlight the importance of these "common factors" in treatment outcomes.  Mental health professionals need to consider the potential placebo effects in psychotherapy.

Tuesday, October 1, 2024

Death threats, legal risk and backlogs weigh on clinicians treating trans minors

Emma Davis
NBC News
Originally posted 28 August 24

Dr. Kade Goepferd has received death threats for their work treating transgender youths at Children’s Minnesota Hospital, but Goepferd said the harassment isn’t the most worrying part of the job. 

“The waitlist is what keeps me up at night,” said Goepferd, who uses they/them pronouns. “It has grown every year, and it got particularly long after the bans went into effect.”

Goepferd is the medical director of the hospital’s Gender Health Program, the only multispeciality pediatric gender clinic in Minnesota. The program has experienced a 30% increase in calls since surrounding states outlawed gender-affirming care for minors, and the waitlist is now at least a year for new patients, even after Goepferd hired additional staff to help the hundreds of trans youths requesting appointments.

Twenty-six states now have restrictions on transgender health care for minors, according to the LGBTQ think tank Movement Advancement Project. The laws have left those still able to provide this type of care, like Goepferd, struggling to keep up with demand.

NBC News spoke to a dozen clinicians in states where gender-affirming care for minors remains legal, from Connecticut to California, and found all are treating transgender youths fleeing bans. Not only does the surge in out-of-state and newly relocated patients create logistical challenges — from waitlists to insurance denials — it also presents a legal risk for health care professionals. Although some states have enacted protections for gender-affirming care providers, these shield laws remain untested in court, and they have done little to deter anti-trans attacks. Many doctors said they’ve had to take added security measures as transphobic rhetoric has intensified.

“There’s been a growing awareness over the last year that the environment is only getting more and more dangerous for providers,” said Kellan Baker, executive director of the Whitman-Walker Institute, a nonprofit advancing LGBTQ health care.


Here are some thoughts:

The situation described in this article raises significant concerns across multiple domains. The long waitlists and limited availability of gender-affirming care pose serious ethical issues, conflicting with the principle of beneficence in medical ethics and potentially exacerbating mental health issues among transgender youth.

The threats and harassment faced by healthcare providers not only raise concerns about their safety and wellbeing but also could deter professionals from offering essential care. The legal ambiguity surrounding gender-affirming care in different states puts providers in a difficult position, forcing them to navigate between professional judgment and legal risks. This hostile environment, combined with the constant legal uncertainties, is likely causing significant stress and burnout among healthcare providers, which could impact the quality of care they're able to provide.

The healthcare system itself faces numerous challenges, including strained resources due to the influx of out-of-state patients, insurance and cost barriers creating healthcare equity issues, and limitations on training opportunities for new providers potentially leading to future workforce shortages. These issues reflect broader societal concerns, including the politicization of healthcare and potential discrimination against transgender individuals, raising civil rights concerns.

The current state of transgender rights presents a complex interplay of ethical, psychological, and systemic challenges that require careful consideration and balanced approaches to ensure both patient care and provider safety. Moving forward, it will be crucial for policymakers, healthcare professionals, and society at large to engage in thoughtful dialogue and evidence-based decision-making to address these multifaceted issues.

Monday, September 30, 2024

Antidiscrimination Law Meets AI—New Requirements for Clinicians, Insurers, & Health Care Organizations

Mello, M. M., & Roberts, J. L. (2024).
JAMA Health Forum, 5(8), e243397.

Responding to the threat that biased health care artificial intelligence (AI) tools pose to health equity, the US Department of Health and Human Services Office for Civil Rights (OCR) published a final rule in May 2024 holding AI users legally responsible for managing the risk of discrimination. This move raises questions about the rule’s fairness and potential effects on AI-enabled health care.

The New Regulatory Requirements

Section 1557 of the Affordable Care Act prohibits recipients of federal funding from discriminating in health programs and activities based on race, color, national origin, sex, age, or disability. Regulated entities include health care organizations, health insurers, and clinicians that participate in Medicare, Medicaid, or other programs. The OCR’s rule sets forth the obligations of these entities relating to the use of decision support tools in patient care, including AI-driven tools and simpler, analog aids like flowcharts and guidelines.

The rule clarifies that Section 1557 applies to discrimination arising from use of AI tools and establishes new legal requirements. First, regulated entities must make “reasonable efforts” to determine whether their decision support tools use protected traits as input variables or factors. Second, for tools that do so, organizations “must make reasonable efforts to mitigate the risk of discrimination.”

Starting in May 2025, the OCR will address potential violations of the rule through complaint-driven investigations and compliance reviews. Individuals can also seek to enforce Section 1557 through private lawsuits. However, courts disagree about whether private actors can sue for disparate impact (practices that are neutral on their face but have discriminatory effects).

Here are some thoughts:

Addressing Bias in Healthcare AI: New Regulatory Requirements and Implications

The US Department of Health and Human Services Office for Civil Rights (OCR) has issued a final rule holding healthcare providers liable for managing the risk of discrimination in AI tools used in patient care. This move aims to address the threat of biased healthcare AI tools to health equity.

New Regulatory Requirements

The OCR's rule clarifies that Section 1557 of the Affordable Care Act applies to discrimination arising from the use of AI tools. Regulated entities must make "reasonable efforts" to determine whether their decision support tools use protected traits as input variables or factors. If so, they must mitigate the risk of discrimination.
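
As a toy illustration (not regulatory guidance), the first of those "reasonable efforts" might begin with something as simple as scanning a tool's input features for protected traits; real audits would also need to consider proxy variables such as ZIP code:

```python
# Hypothetical first-pass audit: flag model inputs that may encode traits
# protected under Section 1557 (race, color, national origin, sex, age,
# disability). Name matching alone is illustrative, not sufficient.
PROTECTED_MARKERS = {"race", "ethnicity", "national_origin", "sex",
                     "gender", "age", "disability"}

def flag_protected_inputs(feature_names: list[str]) -> list[str]:
    return [name for name in feature_names
            if any(marker in name.lower() for marker in PROTECTED_MARKERS)]

print(flag_protected_inputs(["age_at_admission", "serum_creatinine",
                             "race_ethnicity", "bmi"]))
# -> ['age_at_admission', 'race_ethnicity']
```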

Fairness and Enforcement

The rule raises questions about fairness and potential effects on AI-enabled healthcare. While the OCR's approach is flexible, it may create uncertainty for regulated entities. The rule applies only to organizations using AI tools, not developers, who are regulated by other federal rules. The OCR's enforcement will focus on complaint-driven investigations and compliance reviews, with penalties including corrective action plans.

Implications and Concerns

The rule may create market pressure for developers to generate and provide information about bias in their products. However, concerns remain about the compliance burden on adopters, particularly small physician practices and low-resourced organizations. The OCR must provide further guidance and clarification to ensure meaningful compliance.

Facilitating Meaningful Compliance

Additional resources are necessary to make compliance possible for all healthcare organizations. Emerging tools for bias assessment and affordable technical assistance are essential. The question of who will pay for AI assessments looms large, and the business case for adopting AI tools may evaporate if assessment and monitoring costs are high and not reimbursed.

Conclusion

The OCR's rule is an important step towards reducing discrimination in healthcare AI. However, realizing this vision requires resources to make meaningful compliance possible for all healthcare organizations. By addressing bias and promoting equity, we can ensure that AI tools benefit all patients, particularly vulnerable populations.

Sunday, September 29, 2024

Whistleblowing in science: this physician faced ostracization after standing up to pharma

Sara Reardon
nature.com
Originally posted 20 Aug 24

The image of a lone scientist standing up for integrity against a pharmaceutical giant seems romantic and compelling. But to haematologist Nancy Olivieri, who went public when the company sponsoring her drug trial for a genetic blood disorder tried to suppress data about harmful side effects, the experience was as unglamorous as it was damaging and isolating. “There’s a lot of people who fight for justice in research integrity and against the pharmaceutical industry, but very few people know what it’s like to take on the hospital administrators” too, she says.

Now, after more than 30 years of ostracization by colleagues, several job losses and more than 20 lawsuits — some of which are ongoing — Olivieri is still amazed that what she saw as efforts to protect her patients could have proved so controversial, and that so few people took her side. Last year, she won the John Maddox Prize, a partnership between the London-based charity Sense about Science and Nature, which recognizes “researchers who stand up and speak out for science” and who achieve changes amid hostility. “It’s absolutely astounding to me that you could become famous as a physician for saying, ‘I think there might be a complication here,’” she says. “There was a lot of really good work that we could have done that we wasted a lot of years not doing because of all this.”

Olivieri didn’t set out to be a troublemaker. As a young researcher at the University of Toronto (UT), Canada, in the 1980s, she worked with children with thalassaemia — a blood condition that prevents the body from making enough oxygen-carrying haemoglobin, and that causes a fatal build-up of iron in the organs if left untreated. She worked her way up to become head of the sickle-cell-disease programme at the city’s Hospital for Sick Children (SickKids). In 1989, she started a clinical trial at SickKids to test a drug called deferiprone that traps iron in the blood. The hospital eventually brought in a pharmaceutical company called Apotex, based in Toronto, Canada, to co-sponsor the study as part of regulatory requirements.


Here are some thoughts:

The case of Nancy Olivieri, a haematologist who blew the whistle on a pharmaceutical company's attempts to suppress data about harmful side effects of a drug, highlights the challenges and consequences faced by researchers who speak out against industry and institutional pressures. Olivieri's experience demonstrates how institutions can turn against researchers who challenge industry interests, leading to isolation, ostracization, and damage to their careers. Despite the risks, Olivieri's story emphasizes the crucial role of allies and support networks in helping whistle-blowers navigate the challenges they face.

The case also underscores the importance of maintaining research integrity and transparency, even in the face of industry pressure. Olivieri's experience shows that prioritizing patient safety and well-being over industry interests is critical, and institutions must be held accountable for their actions. Additionally, the significant emotional toll that whistle-blowing can take on individuals, including anxiety, isolation, and disillusionment, must be acknowledged.

To address these issues, policy reforms are necessary to protect researchers from retaliation and ensure that they can speak out without fear of retribution. Industry transparency is also essential to minimize conflicts of interest. Furthermore, institutions and professional organizations must establish support networks for researchers who speak out against wrongdoing.

Saturday, September 28, 2024

Humanizing Chatbots Is Hard To Resist — But Why?

Madeline G. Reinecke
Practical Ethics
Originally posted 30 Aug 24

You might recall a story from a few years ago, concerning former Google software engineer Blake Lemoine. Part of Lemoine’s job was to chat with LaMDA, a large language model (LLM) in development at the time, to detect discriminatory speech. But the more Lemoine chatted with LaMDA, the more he became convinced: The model had become sentient and was being deprived of its rights as a Google employee. 

Though Google swiftly denied Lemoine’s claims, I’ve since wondered whether this anthropomorphic phenomenon — seeing a “mind in the machine” — might be a common occurrence in LLM users. In fact, in this post, I’ll argue that it’s bound to be common, and perhaps even irresistible, due to basic facts about human psychology. 

Emerging work suggests that a non-trivial number of people do attribute humanlike characteristics to LLM-based chatbots. This is to say they “anthropomorphize” them. In one study, 67% of participants attributed some degree of phenomenal consciousness to ChatGPT: saying, basically, there is “something it is like” to be ChatGPT. In a separate survey, researchers showed participants actual ChatGPT transcripts, explaining that they were generated by an LLM. Actually seeing the natural language “skills” of ChatGPT further increased participants’ tendency to anthropomorphize the model. These effects were especially pronounced for frequent LLM users.

Why does anthropomorphism of these technologies come so easily? Is it irresistible, as I’ve suggested, given features of human psychology?


Here are some thoughts:

The article explores the phenomenon of anthropomorphism in Large Language Models (LLMs), where users attribute human-like characteristics to AI systems. This tendency is rooted in human psychology, particularly in our inclination to over-detect agency and our association of communication with agency. Studies have shown that a significant number of people, especially frequent users, attribute human-like characteristics to LLMs, raising concerns about trust, misinformation, and the potential for users to internalize inaccurate information.

The article highlights two key cognitive mechanisms underlying anthropomorphism. Firstly, humans have a tendency to over-detect agency, which may have evolved as an adaptive mechanism to detect potential threats. This is exemplified in a classic psychology study where participants attributed human-like actions to shapes moving on a screen. Secondly, language is seen as a sign of agency, even in preverbal infants, which may explain why LLMs' command of natural language serves as a psychological signal of agency.

The author argues that AI developers have a key responsibility to design systems that mitigate anthropomorphism. This can be achieved through design choices such as using disclaimers or avoiding the use of first-personal pronouns. However, the author also acknowledges that these measures may not be sufficient to override the deep tendencies of the human mind. Therefore, a priority for future research should be to investigate whether good technology design can help us resist the pitfalls of LLM-oriented anthropomorphism.
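
To make those design choices concrete, here is a minimal sketch of what a de-anthropomorphizing output filter might look like. The pipeline, rewrite rules, and disclaimer text below are illustrative assumptions, not features of any deployed chatbot system.

```python
import re

# Illustrative rewrite rules that strip first-person framing from model output.
# Both the patterns and the disclaimer are hypothetical examples.
IMPERSONAL_REWRITES = [
    (re.compile(r"\bI think\b"), "One interpretation is"),
    (re.compile(r"\bI believe\b"), "The available evidence suggests"),
    (re.compile(r"\bI feel\b"), "It may seem"),
    (re.compile(r"\bmy view\b", re.IGNORECASE), "one view"),
]

DISCLAIMER = ("[Automated response: this text was generated by a language "
              "model, which has no beliefs, feelings, or experiences.]")

def depersonalize(response: str) -> str:
    """Reduce first-person framing in a chatbot reply, then prepend a disclaimer.

    A production system would need grammatical repair, not just substitution;
    this only shows the shape of the intervention.
    """
    for pattern, replacement in IMPERSONAL_REWRITES:
        response = pattern.sub(replacement, response)
    return f"{DISCLAIMER}\n\n{response}"

if __name__ == "__main__":
    reply = "I think you should rest, and I feel this matters for your recovery."
    print(depersonalize(reply))
```

Of course, as the author notes, surface-level edits like these may be no match for the mind's deeper tendency to read agency into fluent language.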

Ultimately, anthropomorphism is a double-edged sword, making AI systems more relatable and engaging while also risking misinformation and mistrust. By understanding the cognitive mechanisms underlying anthropomorphism, we can develop strategies to mitigate its negative consequences. Future research directions should include investigating effective interventions, exploring the boundaries of anthropomorphism, and developing responsible AI design guidelines that account for anthropomorphism.

Friday, September 27, 2024

Small town living: Unique ethical challenges of rural pediatric integrated primary care

Jaques-Leonard, M. L., et al. (2021).
Clinical Practice in Pediatric Psychology,
9(4), 412–422.

Abstract

Objective: The objective of this paper is to address ethical and training considerations with behavioral health (BH) services practicing within rural, integrated primary care (IPC) sites through the conceptual framework of an ethical acculturation model.

Method: Relevant articles are presented along with a description of how the acculturation model can be implemented to address ethical dilemmas.

Results: Recommendations are provided regarding practice considerations when using the acculturation model and the utility of the model for both established BH practitioners and trainees.

Conclusions: Psychologists integrated into rural IPC teams may be able to enhance their ethical practice and improve outcomes for patients and families through the use of the acculturation model. Psychologists serving as supervisors can utilize the acculturation model to provide valuable experiences to trainees in addressing ethical dilemmas when competing ethical principles are present.

Impact Statement

Implications for Impact Statement: By addressing ethical dilemmas through an acculturation model, psychologists may prevent themselves from drifting away from American Psychological Association ethical principles within the context of a multidisciplinary team while simultaneously providing valuable learning opportunities for trainees. This focus is particularly important in rural settings, where access to specialty care and other resources is limited and a psychologist may be the only licensed behavioral health provider on a multidisciplinary team.

Thursday, September 26, 2024

Decoding loneliness: Can explainable AI help in understanding language differences in lonely older adults?

Wang, N., et al. (2024).
Psychiatry Research, 339, 116078.

Abstract

Study objectives
Loneliness impacts the health of many older adults, yet effective and targeted interventions are lacking. Compared to surveys, speech data can capture the personalized experience of loneliness. In this proof-of-concept study, we used Natural Language Processing to extract novel linguistic features and AI approaches to identify linguistic features that distinguish lonely adults from non-lonely adults.

Methods
Participants completed UCLA loneliness scales and semi-structured interviews (sections: social relationships, loneliness, successful aging, meaning/purpose in life, wisdom, technology and successful aging). We used the Linguistic Inquiry and Word Count (LIWC-22) program to analyze linguistic features and built a classifier to predict loneliness. Each interview section was analyzed using an explainable AI (XAI) model to classify loneliness.

Results
The sample included 97 older adults (ages 66–101 years; 65% women). The model showed high accuracy (0.889; AUC: 0.8), a strong F1 score (0.8), and perfect recall (1.0). The interview sections on social relationships and loneliness were most important for classifying loneliness, with social themes, conversational fillers, and pronoun usage among the most informative features.

Conclusions
XAI approaches can be used to detect loneliness through the analyses of unstructured speech and to better understand the experience of loneliness.


Here are some thoughts:

AI has the potential to be helpful for mental health professionals. A recently published proof-of-concept study shows that AI can identify loneliness by analyzing unstructured speech, a promising approach for detecting loneliness, particularly among older adults.

The analysis showed that lonely individuals frequently referenced social status and religion and expressed more negative emotions. In contrast, non-lonely individuals focused on social connections, family, and lifestyle. Additionally, lonely individuals used more first-person singular pronouns, indicating a self-focused perspective, whereas non-lonely individuals used more first-person plural pronouns, suggesting a sense of inclusion and connection.

Furthermore, the study found that conversational fillers, non-fluencies, and internet slang were more prevalent in the speech of lonely individuals. Lonely individuals also used more causation conjunctions, indicating a tendency to provide detailed explanations of their experiences. These findings suggest that the way people communicate may reflect their feelings about social relationships.

The AI model offers a scalable and less intrusive method for assessing loneliness, which can significantly impact mental and physical health, particularly in older adults. While the study has limitations, including a relatively small sample size, the researchers aim to expand their work to more diverse populations and explore how to better assess loneliness.
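
For readers curious about the mechanics, the sketch below shows the general shape of this kind of pipeline: LIWC-style linguistic features feeding an interpretable classifier whose coefficients explain the prediction. The feature names and data are synthetic stand-ins, and a plain logistic regression stands in for the paper's actual LIWC-22 and XAI tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical LIWC-style features per interview transcript; the study itself
# extracted such features with LIWC-22 from semi-structured interviews.
feature_names = ["i_pronouns", "we_pronouns", "negative_emotion",
                 "social_words", "fillers", "causation"]

rng = np.random.default_rng(0)
n = 97  # matches the study's sample size; the feature values here are synthetic
X = rng.normal(size=(n, len(feature_names)))

# Synthetic labels loosely mimicking the reported pattern: more "I" pronouns,
# negative emotion, and fillers among lonely speakers; more "we" pronouns and
# social words among the non-lonely.
signal = 1.2 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2] - 0.9 * X[:, 3] + 0.6 * X[:, 4]
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

print("accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))
print("AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))

# A linear model is directly explainable: each coefficient shows how strongly a
# feature pushes the prediction toward "lonely" (+) or "not lonely" (-).
for name, coef in sorted(zip(feature_names, clf.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:18s} {coef:+.2f}")
```

The appeal of an explainable model here is that the same fit that classifies loneliness also surfaces which linguistic markers drive the call, mirroring the paper's findings about pronouns, fillers, and social themes.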

Wednesday, September 25, 2024

Vote for Kamala Harris to Support Science, Health and the Environment

The Editors
Scientific American
Originally posted 16 Sept 24

In the November election, the U.S. faces two futures. In one, the new president offers the country better prospects, relying on science, solid evidence and the willingness to learn from experience. She pushes policies that boost good jobs nationwide by embracing technology and clean energy. She supports education, public health and reproductive rights. She treats the climate crisis as the emergency it is and seeks to mitigate its catastrophic storms, fires and droughts.

In the other future, the new president endangers public health and safety and rejects evidence, preferring instead nonsensical conspiracy fantasies. He ignores the climate crisis in favor of more pollution. He requires that federal officials show personal loyalty to him rather than upholding U.S. laws. He fills positions in federal science and other agencies with unqualified ideologues. He goads people into hate and division, and he inspires extremists at state and local levels to pass laws that disrupt education and make it harder to earn a living.

Only one of these futures will improve the fate of this country and the world. That is why, for only the second time in our magazine’s 179-year history, the editors of Scientific American are endorsing a candidate for president. That person is Kamala Harris.

Before making this endorsement, we evaluated Harris’s record as a U.S. senator and as vice president under Joe Biden, as well as policy proposals she’s made as a presidential candidate. Her opponent, Donald Trump, who was president from 2017 to 2021, also has a record—a disastrous one. Let’s compare.


Here are some thoughts:

The upcoming U.S. presidential election presents two vastly different futures for the country. On one hand, Vice President Kamala Harris offers a vision built on science, evidence, and a willingness to learn from experience. Her policies focus on creating good jobs, promoting clean energy, supporting education, public health, and reproductive rights, and addressing the climate crisis.

On the other hand, former President Donald Trump's vision rejects evidence and relies on conspiracy theories. His policies endanger public health and safety, ignore the climate crisis, and promote division and extremism.

Key Policy Differences:
  • Healthcare: Harris supports expanding the Affordable Care Act and Medicaid, while Trump proposes cuts to Medicare and Medicaid and repealing the ACA.
  • Reproductive Rights: Harris advocates for reinstating Roe v. Wade protections, while Trump appointed justices who overturned it and restricts access to abortion.
  • Gun Safety: Harris supports closing gun-show loopholes, while Trump promises to undo Biden-Harris gun measures.
  • Environment and Climate: Harris acknowledges climate change and supports renewable energy, while Trump denies it and rolled back environmental policies.
  • Technology: Harris promotes safe AI development, while Trump's Project 2025 framework would overturn AI safeguards.
  • Economic Implications: Harris's platform aims to create jobs in rural America through renewable energy projects and increase tax deductions for small businesses. Trump's policies may lead to increased pollution, division, and economic uncertainty.
Conclusion:

The choice between Harris and Trump represents two distinct futures for the U.S. Harris offers a path forward guided by rationality and respect, while Trump promotes division and demagoguery. The outcome of this election will significantly impact the country's direction.

Tuesday, September 24, 2024

This researcher wants to replace your brain, little by little

Antonio Regalado
MIT Technology Review
Originally posted 16 Aug 24

Jean Hébert, a new hire with the US Advanced Research Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around “functional brain tissue replacement,” the idea of adding youthful tissue to people’s brains.

President Joe Biden created ARPA-H in 2022, as an agency within the Department of Health and Human Services, to pursue what he called “bold, urgent innovation” with transformative potential.

The brain renewal concept could have applications such as treating stroke victims, who lose areas of brain function. But Hébert, a biologist at the Albert Einstein College of Medicine, has most often proposed total brain replacement, along with replacing other parts of our anatomy, as the only plausible means of avoiding death from old age.

As he described in his 2020 book, Replacing Aging, Hébert thinks that to live indefinitely people must find a way to substitute all their body parts with young ones, much like a high-mileage car is kept going with new struts and spark plugs.


Here are some thoughts:

The Advanced Research Projects Agency for Health (ARPA-H) has taken a bold step by hiring Jean Hébert, a researcher who advocates for a radical plan to defeat death by replacing human body parts, including the brain. Hébert's idea involves progressively replacing brain tissue with youthful lab-made tissue, allowing the brain to adapt and maintain memories and self-identity. This concept is not widely accepted in the scientific community, but ARPA-H has endorsed Hébert's proposal with a potential $110 million project to test his ideas in animals.

From an ethical standpoint, Hébert's proposal raises concerns, such as the potential use of human fetuses as a source of life-extending parts and the creation of non-sentient human clones for body transplants. On the scientific side, his idea relies on the brain's ability to adapt and reorganize itself, a capacity supported by evidence from rare cases of benign brain tumors and from experiments with fetal-stage cell transplants. Developing youthful facsimiles of brain tissue from stem cells remains a significant scientific challenge, requiring the creation of complex structures with multiple cell types.

The success of Hébert's proposal depends on several factors, including whether young brain tissue can function correctly in an elderly person's brain, establish connections, and store and send electrochemical information. Despite these uncertainties, ARPA-H's endorsement and potential funding of Hébert's proposal demonstrate a willingness to explore unconventional approaches to aging and age-related diseases. This move may pave the way for future research into extreme life extension and challenge societal norms surrounding aging and mortality.

Hébert's work has sparked interest among immortalists, a fringe community devoted to achieving eternal life. His connections to this community and his willingness to explore radical approaches have made him an edgy choice for ARPA-H. However, his focus on the neocortex, the outer part of the brain responsible for most of our senses, reasoning, and memory, may hold the key to understanding how to replace brain tissue without losing essential functions. As Hébert embarks on this ambitious project, the scientific community will be watching closely to see if his ideas can overcome the significant scientific and ethical hurdles associated with replacing human brain tissue.

Monday, September 23, 2024

Generative AI Can Harm Learning

Bastani, H. et al. (July 15, 2024).
Available at SSRN:

Abstract

Generative artificial intelligence (AI) is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity. However, a key remaining question is how generative AI affects learning, namely, how humans acquire new skills as they perform tasks. This kind of skill learning is critical to long-term productivity gains, especially in domains where generative AI is fallible and human experts must check its outputs. We study the impact of generative AI, specifically OpenAI's GPT-4, on human learning in the context of math classes at a high school. In a field experiment involving nearly a thousand students, we have deployed and evaluated two GPT-based tutors, one that mimics a standard ChatGPT interface (called GPT Base) and one with prompts designed to safeguard learning (called GPT Tutor). These tutors comprise about 15% of the curriculum in each of three grades. Consistent with prior work, our results show that access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes. These negative learning effects are largely mitigated by the safeguards included in GPT Tutor. Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.
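
The difference between the two conditions is essentially a prompting decision, and the study's actual prompts are not reproduced in the abstract. The sketch below is therefore a hypothetical illustration of a "hints, not answers" safeguard wired up with the OpenAI Python client; the prompt text and model name are assumptions, not the authors' materials.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical safeguard prompt: the study's actual "GPT Tutor" prompts are not
# public in the abstract, so this wording is an illustrative guess.
TUTOR_SYSTEM_PROMPT = (
    "You are a high-school math tutor. Never state the final answer to a "
    "practice problem. Ask guiding questions, point out errors in the "
    "student's work, and give hints one step at a time so the student "
    "reaches the answer on their own."
)

def tutor_reply(student_message: str) -> str:
    """Return a hints-only tutoring response in the spirit of 'GPT Tutor'."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # stand-in; the study used GPT-4
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("Solve 2x + 6 = 14 for x. Just give me the answer."))
```

The design intent is that the model scaffolds the student's own reasoning rather than substituting for it, which is exactly the "crutch" dynamic the study's safeguards were meant to prevent.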


Here are some thoughts:

The deployment of GPT-based tutors in educational settings presents a cautionary tale. While generative AI tools like ChatGPT can make tasks significantly easier for humans, they also risk eroding our ability to learn essential skills. This trade-off is not new: earlier technologies such as typing and calculators also reduced the need for certain skills. ChatGPT's broader intellectual reach and propensity for providing incorrect responses, however, make it unique.

Unlike earlier technologies, ChatGPT's unreliability poses a distinct challenge: students may struggle to detect its errors, or may be unwilling to invest the effort required to verify its responses. Either failure can undermine their learning and understanding of critical skills. The study's authors suggest that more work is needed to ensure that generative AI enhances education rather than diminishing it.

The findings underscore the importance of critical thinking and media literacy in the age of AI. Educators must be aware of the potential risks and benefits of AI-powered tools and design them to augment human capabilities rather than replace them. Accountability and transparency in AI development and deployment are crucial to mitigating these risks. By acknowledging these challenges, we can harness the potential of AI to enhance education and promote meaningful learning.