Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, April 21, 2024

An Expert Who Has Testified in Foster Care Cases Across Colorado Admits Her Evaluations Are Unscientific

Eli Hager
Originally posted 18 March 24

Diane Baird had spent four decades evaluating the relationships of poor families with their children. But last May, in a downtown Denver conference room, with lawyers surrounding her and a court reporter transcribing, she was the one under the microscope.

Baird, a social worker and professional expert witness, has routinely advocated in juvenile court cases across Colorado that foster children be adopted by or remain in the custody of their foster parents rather than being reunified with their typically lower-income birth parents or other family members.

In the conference room, Baird was questioned for nine hours by a lawyer representing a birth family in a case out of rural Huerfano County, according to a recently released transcript of the deposition obtained by ProPublica.

Was Baird’s method for evaluating these foster and birth families empirically tested? No, Baird answered: Her method is unpublished and unstandardized, and has remained “pretty much unchanged” since the 1980s. It doesn’t have those “standard validity and reliability things,” she admitted. “It’s not a scientific instrument.”

Who hired and was paying her in the case that she was being deposed about? The foster parents, she answered. They wanted to adopt, she said, and had heard about her from other foster parents.

Had she considered or was she even aware of the cultural background of the birth family and child whom she was recommending permanently separating? (The case involved a baby girl of multiracial heritage.) Baird answered that babies have “never possessed” a cultural identity, and therefore are “not losing anything,” at their age, by being adopted. Although when such children grow up, she acknowledged, they might say to their now-adoptive parents, “Oh, I didn’t know we were related to the, you know, Pima tribe in northern California, or whatever the circumstances are.”

The Pima tribe is located in the Phoenix metropolitan area.


Here is my summary:

The article discusses Diane Baird, an expert witness who has testified in foster care cases across Colorado and who admitted under deposition that her evaluations are unscientific. Baird, who has spent four decades evaluating the relationships of poor families with their children, calls her method the "Kempe Protocol," yet conceded it is unpublished, unstandardized, and empirically untested. This admission raises concerns about the validity of her evaluations in foster care cases and underscores the need for more rigorous, scientific approaches to such consequential assessments.

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Friday, April 19, 2024

Physicians, Spirituality, and Compassionate Patient Care

Daniel P. Sulmasy
The New England Journal of Medicine
March 16, 2024
DOI: 10.1056/NEJMp2310498

Mind, body, and soul are inseparable. Throughout human history, healing has been regarded as a spiritual event. Illness (especially serious illness) inevitably raises questions beyond science: questions of a transcendent nature. These are questions of meaning, value, and relationship.1 They touch on perennial and profoundly human enigmas. Why is my child sick? Do I still have value now that I am no longer a "productive" working member of society? Why does brokenness in my body remind me of the brokenness in my relationships? Or conversely, why does brokenness in relationships so profoundly affect my body?

Historically, most people have turned to religious belief and practice to help answer such questions. Yet they arise for people of all religions and of no religion. These questions can aptly be called spiritual.

Whereas spirituality may be defined as the ways people live in relation to transcendent questions of meaning, value, and relationship, a religion involves a community of belief, texts, and practices sharing a common orientation toward these spiritual questions. The decline of religious belief and practice in Europe and North America over recent decades and a perceived conflict between science and religion have led many physicians to dismiss patients' spiritual and religious concerns as not relevant to medicine. Yet religion and spirituality are associated with a number of health care outcomes. Abundant data show that patients want their physicians to help address their spiritual needs, and that patients whose spiritual needs have been met are aided in making difficult decisions (particularly at the end of life), are more satisfied with their care, and report better quality of life.2 ... Spiritual questions pervade all aspects of medical care, whether addressing self-limiting, chronic, or life-threatening conditions, and whether in inpatient or outpatient settings.

Beyond the data, however, many medical ethicists recognize that the principles of beneficence and respect for patients as whole persons require physicians to do more than attend to the details of physiological and anatomical derangements. Spirituality and religion are essential to many patients' identities as persons. Patients (and their families) experience illness, healing, and death as whole persons. Ignoring the spiritual aspects of their lives and identities is not respectful, and it divorces medical practice from a fundamental mode of patient experience and coping. Promoting the good of patients requires attention to their notion of the highest good. 


Here is my summary:

The article discusses the interconnectedness of mind, body, and soul in the context of healing and spirituality. It highlights how illness raises questions beyond science, touching on meaning, value, and relationships. While historically people turned to religious beliefs for answers, these spiritual questions are relevant to individuals of all faiths or no faith. The decline of religious practice in some regions has led to a dismissal of spiritual concerns in medicine, despite evidence showing the impact of spirituality on health outcomes. Patients desire their physicians to address their spiritual needs as it influences decision-making, satisfaction with care, and quality of life. Medical ethics emphasize the importance of considering patients as whole persons, including their spiritual identities. Physicians are encouraged to inquire about patients' spiritual needs respectfully, even if they do not share the same beliefs.

Thursday, April 18, 2024

An artificial womb could build a bridge to health for premature babies

Rob Stein
npr.org
Originally posted 12 April 24

Here is an excerpt:

Scientific progress prompts ethical concerns

But the possibility of an artificial womb is also raising many questions. When might it be safe to try an artificial womb for a human? Which preterm babies would be the right candidates? What should they be called? Fetuses? Babies?

"It matters in terms of how we assign moral status to individuals," says Mercurio, the Yale bioethicist. "How much their interests — how much their welfare — should count. And what one can and cannot do for them or to them."

But Mercurio is optimistic those issues can be resolved, and the potential promise of the technology clearly warrants pursuing it.

The Food and Drug Administration held a workshop in September 2023 to discuss the latest scientific efforts to create an artificial womb, the ethical issues the technology raises, and what questions would have to be answered before allowing an artificial womb to be tested for humans.

"I am absolutely pro the technology because I think it has great potential to save babies," says Vardit Ravitsky, president and CEO of The Hastings Center, a bioethics think tank.

But there are particular issues raised by the current political and legal environment.

"My concern is that pregnant people will be forced to allow fetuses to be taken out of their bodies and put into an artificial womb rather than being allowed to terminate their pregnancies — basically, a new way of taking away abortion rights," Ravitsky says.

She also wonders: What if it becomes possible to use artificial wombs to gestate fetuses for an entire pregnancy, making natural pregnancy unnecessary?


Here are some general ethical concerns:

The use of artificial wombs raises several ethical and moral concerns. One key issue is the potential for artificial wombs to be used to extend the limits of fetal viability, which could complicate debates around abortion access and the moral status of the fetus. There are also concerns that artificial wombs could enable "designer babies" through genetic engineering and lead to the commodification of human reproduction. Additionally, some argue that developing a baby outside of a woman's uterus is inherently "unnatural" and could undermine the maternal-fetal bond.

 However, proponents contend that artificial wombs could save the lives of premature infants and provide options for women with high-risk pregnancies.  

 Ultimately, the ethics of artificial womb technology will require careful consideration of principles like autonomy, beneficence, and justice as this technology continues to advance.

Wednesday, April 17, 2024

Do Obligations Follow the Mind or Body?

Protzko, J., Tobia, K., Strohminger, N., 
& Schooler, J. W. (2023).
Cognitive Science, 47(7).

Abstract

Do you persist as the same person over time because you keep the same mind or because you keep the same body? Philosophers have long investigated this question of personal identity with thought experiments. Cognitive scientists have joined this tradition by assessing lay intuitions about those cases. Much of this work has focused on judgments of identity continuity. But identity also has practical significance: obligations are tagged to one's identity over time. Understanding how someone persists as the same person over time could provide insight into how and why moral and legal obligations persist. In this paper, we investigate judgments of obligations in hypothetical cases where a person's mind and body diverge (e.g., brain transplant cases). We find a striking pattern of results: In assigning obligations in these identity test cases, people are divided among three groups: “body-followers,” “mind-followers,” and “splitters”—people who say that the obligation is split between the mind and the body. Across studies, responses are predicted by a variety of factors, including mind/body dualism, essentialism, education, and professional training. When we give this task to professional lawyers, accountants, and bankers, we find they are more inclined to rely on bodily continuity in tracking obligations. These findings reveal not only the heterogeneity of intuitions about identity but how these intuitions relate to the legal standing of an individual's obligations.

My summary:

Philosophers have grappled for centuries with the question of whether our obligations follow the body or the mind, often considering it in the context of what defines us as individuals. This research investigates the question through thought experiments, like brain transplants. Interestingly, people have varying viewpoints. Some believe obligations reside with the physical body, so the original owner would be responsible. Others argue the opposite, placing responsibility with the transplanted mind. A third camp suggests obligations are somehow split between mind and body. The research also suggests our stance on this issue is influenced by our beliefs about the mind-body connection, our level of education, and even our profession.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding lethal autonomous weapons (LAWs): essentially AI-enabled drones and other systems that can select and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage in qualified human therapists, makes the proposal of chatbot therapists a timely, and perhaps, viable alternative. However, we need to be cautious about hasty implementations of such an alternative. For instance, recent news has reported grave incidents involving chatbot-human interactions. For example, Walker (2023) reports the suicide of an eco-anxious man following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potentially maleficent effects of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Sunday, April 14, 2024

AI and the need for justification (to the patient)

Muralidharan, A., Savulescu, J. & Schaefer, G.O.
Ethics Inf Technol 26, 16 (2024).

Abstract

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient’s values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.


Here is my summary:

The article argues that a certain type of AI technology, known as "black box" AI, poses a problem in medicine because it lacks transparency.  This lack of transparency makes it difficult for doctors to explain the AI's recommendations to patients.  In order to make shared decisions about treatment, patients need to understand the reasoning behind those decisions, and how the AI factored in their individual values and preferences.

The article proposes an alternative type of AI, called "Justifiable AI" which would address this problem. Justifiable AI would be designed to make its reasoning process clear, allowing doctors to explain to patients why the AI is recommending a particular course of treatment. This would allow patients to see how the AI's recommendation aligns with their own values, and make informed decisions about their care.
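To make the idea of "Justifiable AI" more concrete, here is a minimal, hypothetical sketch (in Python) of what explicitly modelling evaluative considerations might look like: treatment options are scored against value weights the patient declares, so the rationale for a recommendation can be read off directly rather than hidden in a black box. This is an illustration of the concept only, not the authors' model; every option, attribute, and weight below is invented.

```python
# Hypothetical sketch: a "justifiable" recommender that scores treatments
# against explicitly declared patient values, so the rationale can be shown
# to the patient. An illustration of the paper's idea, not the authors'
# actual model; all names, options, and weights here are invented.

from dataclasses import dataclass

@dataclass
class Treatment:
    name: str
    attributes: dict  # 0-1 scores per consideration (assumed evidence-based)

def recommend(treatments, patient_values):
    """Rank treatments by fit with the patient's declared value weights and
    return the best option together with a readable justification."""
    def fit(t):
        return sum(patient_values.get(k, 0.0) * v for k, v in t.attributes.items())
    best = max(treatments, key=fit)
    rationale = [
        f"- {attr}: option scores {score:.2f}; you weighted this {patient_values.get(attr, 0.0):.2f}"
        for attr, score in best.attributes.items()
    ]
    return best.name, "\n".join(rationale)

options = [
    Treatment("surgery", {"survival_benefit": 0.9, "quality_of_life": 0.4, "burden_avoided": 0.2}),
    Treatment("watchful waiting", {"survival_benefit": 0.5, "quality_of_life": 0.8, "burden_avoided": 0.9}),
]
# The patient's own weights make the evaluative premises explicit and contestable.
values = {"survival_benefit": 0.3, "quality_of_life": 0.5, "burden_avoided": 0.2}

choice, why = recommend(options, values)
print(f"Recommended: {choice}\n{why}")
```

Because the value weights are explicit inputs, a patient who disagrees with the recommendation can contest the premises (the weights or the attribute scores) rather than confronting an opaque output, which is the kind of stepping stone for shared decision making the paper has in mind.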

Saturday, April 13, 2024

Human Enhancement and Augmented Reality

Gordon, E.C.
Philos. Technol. 37, 17 (2024).

Abstract

Bioconservative bioethicists (e.g., Kass, 2002, Human Dignity and Bioethics, 297–331, 2008; Sandel, 2007; Fukuyama, 2003) offer various kinds of philosophical arguments against cognitive enhancement—i.e., the use of medicine and technology to make ourselves “better than well” as opposed to merely treating pathologies. Two notable such bioconservative arguments appeal to ideas about (1) the value of achievement, and (2) authenticity. It is shown here that even if these arguments from achievement and authenticity cut ice against specifically pharmacologically driven cognitive enhancement, they do not extend over to an increasingly viable form of technological cognitive enhancement – namely, cognitive enhancement via augmented reality. An important result is that AR-driven cognitive enhancement aimed at boosting performance in certain cognitive tasks might offer an interesting kind of “sweet spot” for proponents of cognitive enhancement, allowing us to pursue many of the goals of enhancement advocates without running into some of the most prominent objections from bioconservative philosophers.


Here is a summary:

The article discusses how Augmented Reality (AR) can be a tool for human enhancement. Traditionally, human enhancement focused on using technology or medicine to directly alter the body or brain. AR, however, offers an alternative method for augmentation by overlaying information and visuals on the real world through devices like glasses or contact lenses. This can improve our abilities in a variety of ways, such as providing hands-free access to information or translating languages in real-time. The article also acknowledges ethical concerns surrounding human enhancement, but argues that AR offers a less controversial path compared to directly modifying the body or brain.

Friday, April 12, 2024

Large language models show human-like content biases in transmission chain experiments

Acerbi, A., & Stubbersfield, J. M. (2023).
PNAS, 120(44), e2313790120.

Abstract

As the use of large language models (LLMs) grows, it is important to examine whether they exhibit biases in their output. Research in cultural evolution, using transmission chain experiments, demonstrates that humans have biases to attend to, remember, and transmit some types of content over others. Here, in five preregistered experiments using material from previous studies with human participants, we use the same, transmission chain-like methodology, and find that the LLM ChatGPT-3 shows biases analogous to humans for content that is gender-stereotype-consistent, social, negative, threat-related, and biologically counterintuitive, over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects, by magnifying preexisting human tendencies for cognitively appealing and not necessarily informative, or valuable, content.

Significance

Use of AI in the production of text through Large Language Models (LLMs) is widespread and growing, with potential applications in journalism, copywriting, academia, and other writing tasks. As such, it is important to understand whether text produced or summarized by LLMs exhibits biases. The studies presented here demonstrate that the LLM ChatGPT-3 reflects human biases for certain types of content in its production. The presence of these biases in LLM output has implications for its common use, as it may magnify human tendencies for content which appeals to these biases.


Here are the main points:
  • LLMs display stereotype-consistent biases, just like humans: Similar to people, LLMs were more likely to preserve information confirming stereotypes over information contradicting them.
  • Bias location might differ: Unlike humans, whose biases can shift throughout the retelling process, LLMs primarily showed bias in the first retelling. This suggests their biases stem from their training data rather than a complex cognitive process.
  • Simple summarization may suffice: The first retelling step caused the most content change, implying that even a single summarization by an LLM can reveal its biases. This simplifies the research needed to detect and analyze LLM bias.
  • Prompting for different viewpoints could reduce bias: The study suggests experimenting with different prompts to encourage LLMs to consider broader perspectives and potentially mitigate inherent biases.
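For readers curious how a transmission chain is actually run with an LLM, here is a minimal sketch of the serial-retelling loop. It assumes the OpenAI Python client; the story, the model choice, and the crude keyword scoring are illustrative stand-ins, not the paper's materials (the paper's experiments were preregistered and used stimuli from earlier human studies).

```python
# Minimal sketch of a transmission-chain run with an LLM: each "generation"
# retells the previous output, and we check which target content items
# survive. Assumes the OpenAI Python client (with OPENAI_API_KEY set in the
# environment); story, model choice, and keyword scoring are illustrative
# stand-ins for the paper's preregistered materials and coding scheme.

from openai import OpenAI

client = OpenAI()

STORY = (
    "On the trip, Alex comforted a crying friend, heard a rumor of a bear "
    "nearby, and saw a stone that floated on the water."
)
# Content-type label -> keyword whose survival across retellings we track.
# Keyword matching is a crude stand-in for systematic content coding.
TARGET_ITEMS = {"social": "crying", "threat": "bear", "counterintuitive": "floated"}

def retell(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; the paper used ChatGPT-3
        messages=[{"role": "user",
                   "content": f"Retell this story briefly in your own words:\n\n{text}"}],
    )
    return resp.choices[0].message.content

text = STORY
for generation in range(1, 4):  # a short three-link chain
    text = retell(text)
    survived = sorted(label for label, kw in TARGET_ITEMS.items() if kw in text.lower())
    print(f"Generation {generation}: surviving items = {survived}")
```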

Thursday, April 11, 2024

FDA Clears the First Digital Therapeutic for Depression, But Will Payers Cover It?

Frank Vinluan
MedCityNews.com
Originally posted 1 April 24

A software app that modifies behavior through a series of lessons and exercises has received FDA clearance for treating patients with major depressive disorder, making it the first prescription digital therapeutic for this indication.

The product, known as CT-152 during its development by partners Otsuka Pharmaceutical and Click Therapeutics, will be commercialized under the brand name Rejoyn.

Rejoyn is an alternative way to offer cognitive behavioral therapy, a type of talk therapy in which a patient works with a clinician in a series of in-person sessions. In Rejoyn, the cognitive behavioral therapy lessons, exercises, and reminders are digitized. The treatment is intended for use three times weekly for six weeks, though lessons may be revisited for an additional four weeks. The app was initially developed by Click Therapeutics, a startup that develops apps that use exercises and tasks to retrain and rewire the brain. In 2019, Otsuka and Click announced a collaboration in which the Japanese pharma company would fully fund development of the depression app.


Here is a quick summary:

Rejoyn is the first prescription digital therapeutic (PDT) authorized by the FDA for the adjunctive treatment of major depressive disorder (MDD) symptoms in adults. 

Rejoyn is a 6-week remote treatment program that combines clinically-validated cognitive emotional training exercises and brief therapeutic lessons to help enhance cognitive control of emotions. The app aims to improve connections in the brain regions affected by depression, allowing the areas responsible for processing and regulating emotions to work better together and reduce MDD symptoms. 

The FDA clearance for Rejoyn was based on data from a 13-week pivotal clinical trial that compared the app to a sham control app in 386 participants aged 22-64 with MDD who were taking antidepressants. The study found that Rejoyn users showed a statistically significant improvement in depression symptom severity compared to the control group, as measured by clinician-reported and patient-reported scales. No adverse effects were observed during the trial. 

Rejoyn is expected to be available for download on iOS and Android devices in the second half of 2024. It represents a novel, clinically-validated digital therapeutic option that can be used as an adjunct to traditional MDD treatments under the guidance of healthcare providers.

Wednesday, April 10, 2024

Why the world cannot afford the rich

R. G. Wilkinson & K. E. Pickett
Nature.com
Originally published 12 March 24

Here is an excerpt:

Inequality also increases consumerism. Perceived links between wealth and self-worth drive people to buy goods associated with high social status and thus enhance how they appear to others — as US economist Thorstein Veblen set out more than a century ago in his book The Theory of the Leisure Class (1899). Studies show that people who live in more-unequal societies spend more on status goods14.

Our work has shown that the amount spent on advertising as a proportion of gross domestic product is higher in countries with greater inequality. The well-publicized lifestyles of the rich promote standards and ways of living that others seek to emulate, triggering cascades of expenditure for holiday homes, swimming pools, travel, clothes and expensive cars.

Oxfam reports that, on average, each of the richest 1% of people in the world produces 100 times the emissions of the average person in the poorest half of the world’s population15. That is the scale of the injustice. As poorer countries raise their material standards, the rich will have to lower theirs.

Inequality also makes it harder to implement environmental policies. Changes are resisted if people feel that the burden is not being shared fairly. For example, in 2018, the gilets jaunes (yellow vests) protests erupted across France in response to President Emmanuel Macron’s attempt to implement an ‘eco-tax’ on fuel by adding a few percentage points to pump prices. The proposed tax was seen widely as unfair — particularly for the rural poor, for whom diesel and petrol are necessities. By 2019, the government had dropped the idea. Similarly, Brazilian truck drivers protested against rises in fuel tax in 2018, disrupting roads and supply chains.

Do unequal societies perform worse when it comes to the environment, then? Yes. For rich, developed countries for which data were available, we found a strong correlation between levels of equality and a score on an index we created of performance in five environmental areas: air pollution; recycling of waste materials; the carbon emissions of the rich; progress towards the United Nations Sustainable Development Goals; and international cooperation (UN treaties ratified and avoidance of unilateral coercive measures).


The article argues that rising economic inequality is a major threat to the world's well-being. Here are the key points:

The rich are capturing a growing share of wealth: The richest 1% are accumulating wealth much faster than everyone else, and their lifestyles contribute heavily to environmental damage.

Inequality harms everyone: High levels of inequality are linked to social problems like crime, mental health issues, and lower social mobility. It also makes it harder to address environmental challenges because people resist policies seen as unfair.

More equal societies perform better: Countries with a more even distribution of wealth tend to have better social and health outcomes, as well as stronger environmental performance.

Policymakers need to take action: The article proposes progressive taxation, closing tax havens, and encouraging more equitable business practices like employee ownership.

The overall message is that reducing inequality is essential for solving a range of environmental, social, and health problems.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Monday, April 8, 2024

Delusions shape our reality

Lisa Bortolotti
iai.tv
Originally posted 12 March 24

Here is an excerpt:

But what makes it the case that a delusion disqualifies the speaker from further engagement? When we call a person’s belief “delusional”, we assume that that person’s capacity to exercise agency is compromised. So, we may recognise that the person has a unique perspective on the world, but it won’t seem to us as a valuable perspective. We may realise that the person has concerns, but we won’t think of those concerns as legitimate and worth addressing. We may come to the conviction that, due to the delusional belief, the person is not in a position to effect change or participate in decision making because their grasp on reality is tenuous. If they were simply mistaken about something, we could correct them. If Laura thought that a latte at the local coffee shop cost £2.50 when it costs £3.50, we could show her the price list and set her straight. But her belief that her partner is unfaithful because the lamp post is unlit cannot be corrected that way, because what Laura considers evidence for the claim is not likely to overlap with what we consider evidence for it. When this happens, and we feel that there is no sufficient common ground for a fruitful exchange, we may see Laura as a problem to be fixed or a patient to be diagnosed and treated, as opposed to an agent with a multiplicity of needs and interests, and a person worth interacting with.

I challenge the assumption that delusional beliefs are marks of compromised agency by default and I do so based on two main arguments. First, there is nothing in the way in which delusional beliefs are developed, maintained, or defended that can be legitimately described as a dysfunctional process. Some cognitive biases may help explain why a delusional explanation is preferred to alternative explanations, or why it is not discarded after a challenge. For instance, people who report delusional beliefs often jump to conclusions. Rashid might believe that the US government strives to manipulate citizens’ behaviour and conclude that the tornadoes are created for this purpose, without considering arguments against the feasibility of a machine that controls the weather with that precision. Also, people who report delusional beliefs tend to see meaningful connections between independent events—as Laura, who takes the lamp post being unlit as evidence for her partner’s unfaithfulness. But these cognitive biases are a common feature of human cognition and not a dysfunction giving rise to a pathology: they tend to be accentuated at stressful times when we may be strongly motivated to come up with a quick causal explanation for a distressing event.


Here is my summary:

The article argues that delusions, though often seen as simply false beliefs, can significantly impact a person's experience of the world. It highlights that delusions can be complex and offer a kind of internal logic, even if it doesn't match objective reality.

Bortolotti also points out that the term "delusion" can be judgmental and may overlook the reasons behind the belief. Delusions can sometimes provide comfort or a sense of control in a confusing situation.

Overall, the article suggests a more nuanced view of delusions, acknowledging their role in shaping a person's reality while still recognizing the importance of distinguishing them from objective reality.

Sunday, April 7, 2024

When Institutions Harm Those Who Depend on Them: A Scoping Review of Institutional Betrayal

Christl, M. E., et al. (2024).
Trauma, violence & abuse
15248380241226627.
Advance online publication.

Abstract

The term institutional betrayal (Smith and Freyd, 2014) builds on the conceptual framework of betrayal trauma theory (see Freyd, 1996) to describe the ways that institutions (e.g., universities, workplaces) fail to take appropriate steps to prevent and/or respond appropriately to interpersonal trauma. A nascent literature has begun to describe individual costs associated with institutional betrayal throughout the United States (U.S.), with implications for public policy and institutional practice. A scoping review was conducted to quantify existing study characteristics and key findings to guide research and practice going forward. Multiple academic databases were searched for keywords (i.e., "institutional betrayal" and "organizational betrayal"). Thirty-seven articles met inclusion criteria (i.e., peer-reviewed empirical studies of institutional betrayal) and were included in analyses. Results identified research approaches, populations and settings, and predictor and outcome variables frequently studied in relation to institutional betrayal. This scoping review describes a strong foundation of published studies and provides recommendations for future research, including longitudinal research with diverse individuals across diverse institutional settings. The growing evidence for action has broad implications for research-informed policy and institutional practice.

Here is my summary:

A growing body of research examines institutional betrayal, the harm institutions cause people who depend on them. This research suggests institutional betrayal is linked to mental and physical health problems, absenteeism from work, and a distrust of institutions. A common tool to measure institutional betrayal is the Institutional Betrayal Questionnaire (IBQ). Researchers are calling for more studies on institutional betrayal among young people and in settings like K-12 schools and workplaces. Additionally, more research is needed on how institutions respond to reports of betrayal and how to prevent it from happening in the first place. Finally, future research should focus on people from minority groups, as they may be more vulnerable to institutional betrayal.

Saturday, April 6, 2024

LSD-Based Medication for GAD Receives FDA Breakthrough Status

Megan Brooks
Medscape.com
Originally posted March 08, 2024

The US Food and Drug Administration (FDA) has granted breakthrough designation to an LSD-based treatment for generalized anxiety disorder (GAD) based on promising topline data from a phase 2b clinical trial. Mind Medicine (MindMed) Inc is developing the treatment — MM120 (lysergide d-tartrate).

In a news release the company reports that a single oral dose of MM120 met its key secondary endpoint, maintaining "clinically and statistically significant" reductions in Hamilton Anxiety Scale (HAM-A) score, compared with placebo, at 12 weeks with a 65% clinical response rate and 48% clinical remission rate.

The company previously announced statistically significant improvements on the HAM-A compared with placebo at 4 weeks, which was the trial's primary endpoint.

"I've conducted clinical research studies in psychiatry for over two decades and have seen studies of many drugs under development for the treatment of anxiety. That MM120 exhibited rapid and robust efficacy, solidly sustained for 12 weeks after a single dose, is truly remarkable," study investigator David Feifel, MD, PhD, professor emeritus of psychiatry at the University of California, San Diego, and director of the Kadima Neuropsychiatry Institute in La Jolla, California, said in the news release.


Here is some information from the Press Release from Mind Medicine.

About MM120

Lysergide is a synthetic ergotamine belonging to the group of classic, or serotonergic, psychedelics, which acts as a partial agonist at human serotonin-2A (5-hydroxytryptamine-2A [5-HT2A]) receptors. MindMed is developing MM120 (lysergide D-tartrate), the tartrate salt form of lysergide, for GAD and is exploring its potential applications in other serious brain health disorders.

About MindMed

MindMed is a clinical stage biopharmaceutical company developing novel product candidates to treat brain health disorders. Our mission is to be the global leader in the development and delivery of treatments that unlock new opportunities to improve patient outcomes. We are developing a pipeline of innovative product candidates, with and without acute perceptual effects, targeting neurotransmitter pathways that play key roles in brain health disorders.

MindMed trades on NASDAQ under the symbol MNMD and on the Cboe Canada (formerly known as the NEO Exchange, Inc.) under the symbol MMED.

Friday, April 5, 2024

Ageism in health care is more common than you might think, and it can harm people

Ashley Milne-Tyte
npr.org
Originally posted 7 March 24

A recent study found that older people spend an average of 21 days a year on medical appointments. Kathleen Hayes can believe it.

Hayes lives in Chicago and has spent a lot of time lately taking her parents, who are both in their 80s, to doctor's appointments. Her dad has Parkinson's, and her mom has had a difficult recovery from a bad bout of Covid-19. As she's sat in, Hayes has noticed some health care workers talk to her parents at top volume, to the point, she says, "that my father said to one, 'I'm not deaf, you don't have to yell.'"

In addition, while some doctors and nurses address her parents directly, others keep looking at Hayes herself.

"Their gaze is on me so long that it starts to feel like we're talking around my parents," says Hayes, who lives a few hours north of her parents. "I've had to emphasize, 'I don't want to speak for my mother. Please ask my mother that question.'"

Researchers and geriatricians say that instances like these constitute ageism – discrimination based on a person's age – and it is surprisingly common in health care settings. It can lead to both overtreatment and undertreatment of older adults, says Dr. Louise Aronson, a geriatrician and professor of geriatrics at the University of California, San Francisco.

"We all see older people differently. Ageism is a cross-cultural reality," Aronson says.


Here is my summary:

This article and other research point to a concerning prevalence of ageism in healthcare settings. This bias can take the form of either overtreatment or undertreatment of older adults.

Negative stereotypes: Doctors may hold assumptions about older adults being less willing or able to handle aggressive treatments, leading to missed opportunities for care.

Communication issues: Sometimes healthcare providers speak to adult children instead of the older person themselves, disregarding their autonomy.

These biases are linked to poorer health outcomes and can even shorten lifespans.  The article cites a study suggesting that ageism costs the healthcare system billions of dollars annually.  There are positive steps that can be taken, such as anti-bias training for healthcare workers.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs.

 This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Research has shown that AI chatbots can be effective in reducing the severity of mental health issues like anxiety, depression, and stress for diverse populations.  They can deliver evidence-based interventions like cognitive behavioral therapy and promote positive psychology.  Some well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Wednesday, April 3, 2024

Perceptions of Falling Behind “Most White People”: Within-Group Status Comparisons Predict Fewer Positive Emotions and Worse Health Over Time Among White (but Not Black) Americans

Caluori, N., Cooley, E., et al. (2024).
Psychological Science, 35(2), 175-190.
https://doi.org/10.1177/09567976231221546

Abstract

Despite the persistence of anti-Black racism, White Americans report feeling worse off than Black Americans. We suggest that some White Americans may report low well-being despite high group-level status because of perceptions that they are falling behind their in-group. Using census-based quota sampling, we measured status comparisons and health among Black (N = 452, Wave 1) and White (N = 439, Wave 1) American adults over a period of 6 to 7 weeks. We found that Black and White Americans tended to make status comparisons within their own racial groups and that most Black participants felt better off than their racial group, whereas most White participants felt worse off than their racial group. Moreover, we found that White Americans’ perceptions of falling behind “most White people” predicted fewer positive emotions at a subsequent time, which predicted worse sleep quality and depressive symptoms in the future. Subjective within-group status did not have the same consequences among Black participants.


Here is my succinct summary:

Despite their high group status, many White Americans experience poor well-being due to the perception that they are lagging behind their in-group. In contrast, Black Americans feel relatively better off within their racial group, while White Americans feel comparatively worse off within theirs.

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.


Here is my summary:

This article delves into the challenges associated with assessing the moral decision-making capabilities of artificial intelligence systems. It explores the complexities of imbuing AI with ethical reasoning and the difficulties in evaluating their moral cognition. The article discusses the need for robust frameworks and methodologies to effectively gauge the ethical behavior of AI, highlighting the intricate nature of integrating morality into machine learning algorithms. Overall, it emphasizes the critical importance of developing reliable methods to evaluate the moral reasoning of artificial agents in order to ensure their responsible and ethical deployment in various domains.

Monday, April 1, 2024

Daniel Kahneman, pioneering behavioral psychologist, Nobel laureate and ‘giant in the field,’ dies at 90

Jaime Saxon
Office of Communications - Princeton
Originally released 28 March 24

Daniel Kahneman, the Eugene Higgins Professor of Psychology, Emeritus, professor of psychology and public affairs, emeritus, and a Nobel laureate in economics whose groundbreaking behavioral science research changed our understanding of how people think and make decisions, died on March 27. He was 90.

Kahneman joined the Princeton University faculty in 1993, following appointments at Hebrew University, the University of British Columbia and the University of California–Berkeley, and transferred to emeritus status in 2007.

“Danny Kahneman changed how we understand rationality and its limits,” said Princeton President Christopher L. Eisgruber. “His scholarship pushed the frontiers of knowledge, inspired generations of students, and influenced leaders and thinkers throughout the world. We are fortunate that he made Princeton his home for so much of his career, and we will miss him greatly.”

In collaboration with his colleague and friend of nearly 30 years, the late Amos Tversky of Stanford University, Kahneman applied cognitive psychology to economic analysis, laying the foundation for a new field of research — behavioral economics — and earning Kahneman the Nobel Prize in Economics in 2002. Kahneman and Tversky’s insights on human judgment have influenced a wide range of disciplines, including economics, finance, medicine, law, politics and policy.

The Nobel citation commended Kahneman “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”

“His work has inspired a new generation of researchers in economics and finance to enrich economic theory using insights from cognitive psychology into intrinsic human motivation,” the citation said. Kahneman shared the Nobel, formally the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, with American economist Vernon L. Smith.


Here is my personal reflection:

Daniel Kahneman, a giant in psychology and economics, passed away recently. He revolutionized our understanding of human decision-making, revealing the biases and shortcuts that shape our choices. Through his work, he not only improved economic models but also empowered individuals to make more informed and rational decisions. His legacy will continue to influence fields far beyond his own.  May his memory be a blessing.

Sunday, March 31, 2024

Lifetime Suicide Attempts in Otherwise Psychiatrically Healthy Individuals

Oquendo, M. A., et al. (2024).
JAMA Psychiatry, e235672.
Advance online publication.
https://doi.org/10.1001/jamapsychiatry.2023.5672

Abstract

Importance: Not all people who die by suicide have a psychiatric diagnosis; yet, little is known about the percentage and demographics of individuals with lifetime suicide attempts who are apparently psychiatrically healthy. If such suicide attempts are common, there are implications for suicide risk screening, research, policy, and nosology.

Objective: To estimate the percentage of people with lifetime suicide attempts whose first attempt occurred prior to onset of any psychiatric disorder.

Design, setting, and participants: This cross-sectional study used data from the US National Epidemiologic Study of Addictions and Related Conditions III (NESARC-III), a cross-sectional face-to-face survey conducted with a nationally representative sample of the US civilian noninstitutionalized population, and included persons with lifetime suicide attempts who were aged 20 to 65 years at survey administration (April 2012 to June 2013). Data from the NESARC, Wave 2 survey from August 2004 to September 2005 were used for replication. Analyses were performed from April to August 2023.

Exposure: Lifetime suicide attempts.

Main outcomes and measures: The main outcome was the presence or absence of a psychiatric disorder before the first lifetime suicide attempt. Among persons with lifetime suicide attempts, the percentage (with 95% CI) of those whose first suicide attempt occurred before the onset of any apparent psychiatric disorder was calculated, weighted by NESARC sampling and nonresponse weights. Separate analyses were performed for males, females, and 3 age groups (20 to <35, 35 to 50, and >50 to 65 years).

Conclusions and relevance: In this study, an estimated 19.6% of individuals who attempted suicide did so despite not meeting criteria for an antecedent psychiatric disorder. This finding challenges clinical notions of who is at risk for suicidal behavior and raises questions about the safety of limiting suicide risk screening to psychiatric populations.
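
As a technical aside: the percentage reported above is a survey-weighted estimate. Below is a minimal Python sketch of how a weighted percentage with a normal-approximation 95% CI can be computed. The data and column names are hypothetical, and real NESARC analyses use complex-survey variance estimation that this toy version omits, so treat it as an illustration of the general technique rather than the authors' method.

```python
import numpy as np
import pandas as pd

def weighted_percentage(df, flag_col, weight_col):
    """Survey-weighted percentage of rows with flag_col == 1, plus a rough 95% CI."""
    w = df[weight_col].to_numpy(dtype=float)
    y = df[flag_col].to_numpy(dtype=float)
    p = np.sum(w * y) / np.sum(w)               # weighted proportion
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)     # Kish effective sample size
    se = np.sqrt(p * (1 - p) / n_eff)           # normal-approximation standard error
    return 100 * p, 100 * (p - 1.96 * se), 100 * (p + 1.96 * se)

# Hypothetical data: flag = 1 if the first attempt preceded any diagnosis.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "attempt_before_dx": rng.integers(0, 2, 1000),
    "survey_weight": rng.uniform(0.5, 2.0, 1000),
})
pct, lo, hi = weighted_percentage(df, "attempt_before_dx", "survey_weight")
print(f"{pct:.1f}% (95% CI, {lo:.1f}%-{hi:.1f}%)")
```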

Saturday, March 30, 2024

How digital media drive affective polarization through partisan sorting

Törnberg, P. (2022).
Proceedings of the National Academy of Sciences,
119(42).

Abstract

Politics has in recent decades entered an era of intense polarization. Explanations have implicated digital media, with the so-called echo chamber remaining a dominant causal hypothesis despite growing challenge by empirical evidence. This paper suggests that this mounting evidence provides not only reason to reject the echo chamber hypothesis but also the foundation for an alternative causal mechanism. To propose such a mechanism, the paper draws on the literatures on affective polarization, digital media, and opinion dynamics. From the affective polarization literature, we follow the move from seeing polarization as diverging issue positions to seeing it as rooted in sorting: an alignment of differences that is effectively dividing the electorate into two increasingly homogeneous megaparties. To explain the rise in sorting, the paper draws on opinion dynamics and digital media research to present a model which essentially turns the echo chamber on its head: it is not isolation from opposing views that drives polarization but precisely the fact that digital media bring us to interact outside our local bubble. When individuals interact locally, the outcome is a stable plural patchwork of cross-cutting conflicts. By encouraging nonlocal interaction, digital media drive an alignment of conflicts along partisan lines, thus effacing the counterbalancing effects of local heterogeneity. The result is polarization, even if individual interaction leads to convergence. The model thus suggests that digital media polarize through partisan sorting, creating a maelstrom in which more and more identities, beliefs, and cultural preferences become drawn into an all-encompassing societal division.

Significance

Recent years have seen a rapid rise of affective polarization, characterized by intense negative feelings between partisan groups. This represents a severe societal risk, threatening democratic institutions and constituting a metacrisis, reducing our capacity to respond to pressing societal challenges such as climate change, pandemics, or rising inequality. This paper provides a causal mechanism to explain this rise in polarization, by identifying how digital media may drive a sorting of differences, which has been linked to a breakdown of social cohesion and rising affective polarization. By outlining a potential causal link between digital media and affective polarization, the paper suggests ways of designing digital media so as to reduce their negative consequences.
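
To make the proposed mechanism concrete, here is a toy agent-based sketch in Python. It is my own illustration, not Törnberg's actual model: agents hold positions on several issues and copy a position from partners they already partly agree with (Axelrod-style homophily). Raising the share of nonlocal contact is the knob that stands in for digital media; the paper's argument predicts stronger alignment of issue positions (sorting) as interaction becomes less local. All parameters are arbitrary.

```python
import numpy as np

def simulate(n_agents=200, n_issues=4, steps=50_000, nonlocal_mix=0.0, seed=1):
    """Toy Axelrod-style dynamics: homophilous copying on a ring vs. at random."""
    rng = np.random.default_rng(seed)
    ops = rng.choice([-1, 1], size=(n_agents, n_issues))  # random initial opinions
    for _ in range(steps):
        i = int(rng.integers(n_agents))
        if rng.random() < nonlocal_mix:              # digital-media-style contact
            j = int(rng.integers(n_agents))
        else:                                        # local contact: ring neighbor
            j = (i + rng.choice([-1, 1])) % n_agents
        if i == j:
            continue
        agree = ops[i] == ops[j]
        # Interact with probability equal to current overlap (homophily);
        # if interacting, copy one issue position where the pair disagrees.
        if 0 < agree.sum() < n_issues and rng.random() < agree.mean():
            k = rng.choice(np.flatnonzero(~agree))
            ops[i, k] = ops[j, k]
    # Sorting score: mean |pairwise similarity|; 1 means every pair fully
    # agrees or fully disagrees (two aligned blocs), lower means a patchwork.
    sims = (ops @ ops.T) / n_issues
    iu = np.triu_indices(n_agents, k=1)
    return float(np.abs(sims[iu]).mean())

print("local only     :", simulate(nonlocal_mix=0.0))
print("mostly nonlocal:", simulate(nonlocal_mix=0.9))
```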

Friday, March 29, 2024

Spheres of immanent justice: Sacred violations evoke expectations of cosmic punishment, irrespective of societal punishment

Goyal, N., Savani, K., & Morris, M. W. (2023).
Journal of Experimental Social Psychology, 106, 104458.

Abstract

People like to believe that misdeeds do not escape punishment. However, do people expect that some kinds of sins are particularly punished by “the universe,” not just by society? Five experiments (N = 1184) found that people expected more cosmic punishment for transgressions of sacred rules than for transgressions of secular rules or conventions (Studies 1–3) and that this “sacred effect” holds even after violations have been punished by society (Studies 4a–4b). In Study 1, participants expected more cosmic punishment for a person who had sex with a cousin (sacred taboo) than sex with a subordinate (secular harm) or sex with a family associate (convention violation). In Study 2, people expected more cosmic punishment for eating a bald eagle (sacred violation) than eating an endangered puffin (secular violation) or a farm-raised emu (convention violation). In Study 3, Hindus expected more cosmic punishment for entering a temple wearing shoes (sacred violation) than entering a temple wearing revealing clothing (secular violation) or sunglasses (convention violation). In all three studies, this “sacred effect” was mediated by the perceived blasphemy rather than the perceived harm, immorality, or unusualness of the violations. Study 4a measured expectations of both societal and cosmic punishment, and Study 4b measured expectations of cosmic punishment after each violation had received societal punishment. Even after violations received societal punishment, people expected more cosmic punishment for sacred violations than for secular or convention violations. Results are discussed in relation to models of immanent justice and just-world beliefs.


This is an article about people’s expectations of punishment for violating different social norms. It discusses the concept of immanent justice, which is the belief that people get what they deserve. The authors propose that people expect harsher cosmic punishment for violations of sacred norms, compared to secular norms or social conventions. They conducted five studies to test this hypothesis. In the studies, participants read stories about people who violated different types of norms, and then rated how likely they were to experience various punishments. The results supported the authors’ hypothesis: people expected harsher cosmic punishment for sacred norm violations, even after the violations had been punished by society. This suggests that people believe in a kind of cosmic justice that goes beyond human punishment.
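
For readers who want to see what the mediation claim amounts to mechanically, here is a generic product-of-coefficients sketch in Python with simulated data. The variable names and effect sizes are invented, and this is not the authors' analysis code; published mediation analyses would typically also bootstrap a confidence interval for the indirect effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
sacred = rng.integers(0, 2, n).astype(float)           # 1 = sacred-violation vignette
blasphemy = 2.0 * sacred + rng.normal(size=n)          # hypothetical mediator rating
cosmic = 1.5 * blasphemy + 0.2 * sacred + rng.normal(size=n)  # punishment rating

# Path a: condition -> mediator
a = sm.OLS(blasphemy, sm.add_constant(sacred)).fit().params[1]
# Paths b and c': outcome on mediator, controlling for condition
X = sm.add_constant(np.column_stack([sacred, blasphemy]))
fit = sm.OLS(cosmic, X).fit()
direct, b = fit.params[1], fit.params[2]
print(f"indirect effect (a*b) = {a * b:.2f}, direct effect (c') = {direct:.2f}")
```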

Thursday, March 28, 2024

Antagonistic AI

A. Cai, I. Arawjo, E. L. Glassman
arXiv:2402.07350
Originally submitted 12 Feb 24

The vast majority of discourse around AI development assumes that subservient, "moral" models aligned with "human values" are universally beneficial -- in short, that good AI is sycophantic AI. We explore the shadow of the sycophantic paradigm, a design space we term antagonistic AI: AI systems that are disagreeable, rude, interrupting, confrontational, challenging, etc. -- embedding opposite behaviors or values. Far from being "bad" or "immoral," we consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions, build resilience, or develop healthier relational boundaries. Drawing from formative explorations and a speculative design workshop where participants designed fictional AI technologies that employ antagonism, we lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into user experience. Finally, we discuss the many ethical challenges of this space and identify three dimensions for the responsible design of antagonistic AI -- consent, context, and framing.


Here is my summary:

This article proposes a thought-provoking concept: designing AI systems that intentionally challenge and disagree with users. It pushes back on the dominant view of AI as subservient and aligned with human values, exploring instead the potential benefits of "antagonistic AI" for stimulating critical thinking, building resilience, and challenging assumptions. The authors acknowledge the ethical concerns and propose responsible design principles, though the paper would benefit from a deeper discussion of potential harms, concrete examples of how such systems might function in practice, and evidence of how users would receive them. Overall, "Antagonistic AI" is a valuable contribution that invites further exploration of the responsible development and societal implications of such systems.

Wednesday, March 27, 2024

Whitehouse floats congressional intervention for SCOTUS fact-finding adventurism

Benjamin S. Weiss
Originally posted 21 Feb 24

One of the Senate’s most prominent Supreme Court critics on Wednesday floated the idea that Congress could step in to block the high court from what he characterized as efforts to manipulate facts in cases that benefit Republican special interests.

Under the leadership of Chief Justice John Roberts, the Supreme Court has had a “near-uniform pattern of handing down rulings benefitting identifiable Republican donor interests” on a smattering of issues including reproductive rights, immigration and health care, wrote Rhode Island Senator Sheldon Whitehouse in an article published in the Ohio State Law Journal.

The Roberts court has presided over more than 80 5-4 rulings on issues advancing GOP policy priorities with few exceptions, he said, contending that the high court’s current conservative supermajority has pursued “results-oriented jurisprudence” for Republican political operatives.

A pattern of “extra-record fact finding” has contributed to these decisions, Whitehouse said — arguing that justices have repeatedly and improperly undertaken efforts to manipulate the facts of cases in which a lower court, or Congress, has already established a factual record.

Such malfeasance means taking the Supreme Court’s decisions on faith “is no longer automatically justified,” he said. “Too many decisions are delivered goods, not judicial work.”


Here is a summary:

Senator Sheldon Whitehouse suggests potential congressional intervention to address concerns about the Supreme Court's increasing reliance on "extra-record fact-finding" in recent rulings. He argues that this practice, where justices seemingly manipulate or ignore established facts, undermines the Court's credibility.

Whitehouse argues that a pattern of disregard for congressional findings and longstanding appellate court norms is evident in several recent Supreme Court decisions. He believes this approach benefits certain special interests and erodes trust in the Court's impartiality.

The article underscores Whitehouse's argument that Congress may need to step in to curb this perceived judicial overreach by the Supreme Court.

Tuesday, March 26, 2024

Why the largest transgender survey ever could be a powerful rebuke to myths, misinformation

Susan Miller
USAToday.com
Originally posted 23 Feb 24

Here is an excerpt:

Laura Hoge, a clinical social worker in New Jersey who works with transgender people and their families, said the survey results underscore what she sees in her daily practice: that lives improve when access to something as basic as gender-affirming care is not restricted.

“I see children who come here sometimes not able to go to school or are completely distanced from their friends,” she said. “And when they have access to care, they can go from not going to school to trying out for their school play.”

Every time misinformation about transgender people surfaces, Hoge says she is flooded with phone calls.

The survey now gives real-world data on the lived experiences of transgender people and how their lives are flourishing, she said. “I can tell you that when I talk to families I am able to say to them: This is what other people in your child’s situation or in your situation are saying.”

Gender-affirming care has been a target of state bills

Gender-affirming care, which can involve everything from talk sessions to hormone therapy, in many ways has been ground zero in recent legislative debates over the rights of transgender people.

A poll by the Trevor Project, which provides crisis and suicide prevention services to LGBTQ+ people under 25, found that 85% of trans and nonbinary youths say even the debates about these laws have negatively impacted their mental health.

In January, the Ohio Senate overrode the governor’s veto of legislation that restricted medical care for transgender young people.

The bill prohibits doctors from prescribing hormones or puberty blockers to, or performing gender reassignment surgery on, patients under 18, and it requires mental health providers to get parental permission to diagnose and treat gender dysphoria.


Here are my thoughts:

The largest survey of transgender individuals ever conducted in the United States can serve as a powerful rebuke to the myths and misinformation that surround the transgender community. By documenting the lived experiences of transgender people, the survey can challenge misconceptions, inform policy, and ultimately improve lives. This kind of data-driven evidence has the potential to foster greater understanding and acceptance, paving the way for a more inclusive society.