Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Informed Consent.

Sunday, June 1, 2025

Reconsidering Informed Consent for Trans-Identified Children, Adolescents, and Young Adults

Levine, S. B., Abbruzzese, E., & Mason, J. W. (2022).
Journal of Sex & Marital Therapy, 48(7), 706–727.

Abstract

In less than a decade, the western world has witnessed an unprecedented rise in the numbers of children and adolescents seeking gender transition. Despite the precedent of years of gender-affirmative care, the social, medical and surgical interventions are still based on very low-quality evidence. The many risks of these interventions, including medicalizing a temporary adolescent identity, have come into a clearer focus through an awareness of detransitioners. The risks of gender-affirmative care are ethically managed through a properly conducted informed consent process. Its elements—deliberate sharing of the hoped-for benefits, known risks and long-term outcomes, and alternative treatments—must be delivered in a manner that promotes comprehension. The process is limited by: erroneous professional assumptions; poor quality of the initial evaluations; and inaccurate and incomplete information shared with patients and their parents. We discuss data on suicide and present the limitations of the Dutch studies that have been the basis for interventions. Beliefs about gender-affirmative care need to be separated from the established facts. A proper informed consent process can both prepare parents and patients for the difficult choices that they must make and can ease professionals’ ethical tensions. Even when properly accomplished, however, some clinical circumstances exist that remain quite uncertain.

Here are some thoughts:

The article critiques the prevailing standards for obtaining informed consent in the context of gender-affirming medical interventions for minors and young adults. It argues that current practices often fail to adequately ensure that patients—and in many cases, their guardians—fully understand the long-term risks, uncertainties, and implications of puberty blockers, cross-sex hormones, and surgeries. The authors contend that the developmental immaturity of children and adolescents, combined with social pressures and sometimes incomplete psychological evaluations, undermines the ethical validity of consent. They advocate for a more cautious, evidence-informed, and ethically rigorous approach that prioritizes psychological exploration and long-term outcomes over immediate affirmation and medical intervention.

Friday, May 30, 2025

How Does Therapy Harm? A Model of Adverse Process Using Task Analysis in the Meta-Synthesis of Service Users' Experience

Curran, J., Parry, G. D., et al. (2019).
Frontiers in Psychology, 10.

Abstract

Background: Despite repeated discussion of treatment safety, there remains little quantitative research directly addressing the potential of therapy to harm. In contrast, there are numerous sources of qualitative evidence on clients' negative experience of psychotherapy, which they report as harmful.

Objective: To derive a model of process factors potentially leading to negative or harmful effects of therapy, from the clients' perspective, based on a systematic narrative synthesis of evidence on negative experiences and effects of psychotherapy from (a) qualitative research findings and (b) participants' testimony.

Method: We adapted Greenberg (2007) task analysis as a discovery-oriented method for the systematic synthesis of qualitative research and service user testimony. A rational model of adverse processes in psychotherapy was empirically refined in two separate analyses, which were then compared and incorporated into a rational-empirical model. This was then validated against an independent qualitative study of negative effects.

Results: Over 90% of the themes in the rational-empirical model were supported in the validation study. Contextual issues, such as lack of cultural validity and therapy options together with unmet client expectations fed into negative therapeutic processes (e.g., unresolved alliance ruptures). These involved a range of unhelpful therapist behaviors (e.g., rigidity, over-control, lack of knowledge) associated with clients feeling disempowered, silenced, or devalued. These were coupled with issues of power and blame.

Conclusions: Task analysis can be adapted to extract meaning from large quantities of qualitative data, in different formats. The service user perspective reveals there are potentially harmful factors at each stage of the therapy journey which require remedial action. Implications of these findings for practice improvement are discussed.

Here are some thoughts:

The article offers important insights for psychologists into the often-overlooked negative impacts of psychotherapy. It emphasizes that, while therapy generally leads to positive outcomes, it can sometimes result in unintended harm such as increased emotional distress, symptom deterioration, or damage to self-concept and relationships. These adverse effects often arise from ruptures in the therapeutic alliance, misattunement, or a lack of responsiveness to clients’ feedback. The study highlights the importance of maintaining a strong, collaborative therapeutic relationship and recommends that therapists actively seek client input throughout the process. Regular supervision and training are also essential for helping clinicians recognize and address early signs of harm. Informed consent should include discussion of potential risks, and routine outcome monitoring can serve as an early detection system for negative therapy responses. Ultimately, this research underscores the ethical responsibility of psychologists to remain vigilant, self-reflective, and client-centered in order to prevent harm and ensure therapy remains a safe and effective intervention.

Wednesday, May 21, 2025

Optimized Informed Consent for Psychotherapy: Protocol for a Randomized Controlled Trial

Gerke, L., et al. (2022).
JMIR Research Protocols, 11(9), e39843.

Abstract
Background:
Informed consent is a legal and ethical prerequisite for psychotherapy. However, in clinical practice, consistent strategies to obtain informed consent are scarce. Inconsistencies exist regarding the overall validity of informed consent for psychotherapy as well as the disclosure of potential mechanisms and negative effects, the latter posing a moral dilemma between patient autonomy and nonmaleficence.

Objective:
This protocol describes a randomized controlled web-based trial aiming to investigate the efficacy of a one-session optimized informed consent consultation.

Methods:
The optimized informed consent consultation was developed to provide information on the setting, efficacy, mechanisms, and negative effects via expectation management and shared decision-making techniques. A total of 122 participants with an indication for psychotherapy will be recruited. Participants will take part in a baseline assessment, including a structured clinical interview for Diagnostic and Statistical Manual of Mental Disorders-fifth edition (DSM-5) disorders. Eligible participants will be randomly assigned either to a control group receiving an information brochure about psychotherapy as treatment as usual (n=61) or to an intervention group receiving treatment as usual and the optimized informed consent consultation (n=61). Potential treatment effects will be measured after the treatment via interview and patient self-report and at 2 weeks and 3 months follow-up via web-based questionnaires. Treatment expectation is the primary outcome. Secondary outcomes include the capacity to consent, decisional conflict, autonomous treatment motivation, adherence intention, and side-effect expectations.

Results:
This trial received a positive ethics vote by the local ethics committee of the Center for Psychosocial Medicine, University-Medical Center Hamburg-Eppendorf, Hamburg, Germany on April 1, 2021, and was prospectively registered on June 17, 2021. The first participant was enrolled in the study on August 5, 2021. We expect to complete data collection in December 2022. After data analysis within the first quarter of 2023, the results will be submitted for publication in peer-reviewed journals in summer 2023.

Conclusions:
If effective, the optimized informed consent consultation might not only constitute an innovative clinical tool to meet the ethical and legal obligations of informed consent but also strengthen the contributing factors of psychotherapy outcome, while minimizing nocebo effects and fostering shared decision-making.

Here are some thoughts:

This research study investigated an optimized informed consent process in psychotherapy. Recognizing inconsistencies in standard practices, the study tested an enhanced consultation method designed to improve patients' understanding of treatment, manage their expectations, and promote shared decision-making. By comparing this enhanced approach to standard practice with a cohort of 122 participants, the researchers aimed to demonstrate the benefits of a more comprehensive and collaborative informed consent process in fostering positive treatment expectations and related outcomes. The findings were anticipated to provide evidence for a more effective and ethical approach to initiating psychotherapy.

Tuesday, February 18, 2025

Pulling Out the Rug on Informed Consent — New Legal Threats to Clinicians and Patients

Underhill, K., & Nelson, K. M. (2025).
New England Journal of Medicine.

In recent years, state legislators in large portions of the United States have devised and enacted new legal strategies to limit access to health care for transgender people.1 To date, 26 states have enacted outright bans on gender-affirming care, which thus far apply only to minors. Other state laws create financial or procedural obstacles to this type of care, such as bans on insurance coverage, requirements to obtain opinions from multiple clinicians, or consent protocols that are stricter than those for other health care.

These laws target clinicians who provide gender-affirming care, but all clinicians — in every jurisdiction and specialty — should take note of the intrusive legal actions that are emerging in the regulation of health care for transgender people. Like the development of restrictive abortion laws, new legal tactics for attacking gender-affirming care are likely to guide legislative opposition to other politically contested medical interventions. Here we consider one particular legal strategy that, if more widely adopted, could challenge the legal infrastructure underlying U.S. health care.

The article is paywalled. :(

The author was kind and sent a copy to me.

Here are some thoughts:

The article discusses the increasing legal strategies employed by state legislators to restrict access to healthcare for transgender people, particularly minors. It focuses on a new legal technique in Utah that allows patients who received "hormonal transgender treatment" or surgery on "sex characteristics" as minors to retroactively revoke their consent until the age of 25, potentially exposing clinicians to legal claims. This law challenges the core of the clinician-patient relationship and the legal infrastructure of U.S. healthcare by undermining the principle of informed consent.

The authors argue that Utah's law places an unreasonable burden on clinicians, extending beyond gender-affirming care and potentially deterring them from providing necessary medical services to minors. They express concern that this legal strategy could spread to other states and be applied to other politically contested medical interventions, such as contraception or vaccination. The authors conclude that allowing patients to withdraw consent retroactively threatens the foundation of the U.S. health care system, as it undermines clinicians' ability to rely on informed consent at the time of care and could destabilize access to various healthcare services.

Saturday, February 1, 2025

Augmenting research consent: Should large language models (LLMs) be used for informed consent to clinical research?

Allen, J. W., et al. (2024).
Research Ethics, in press.

Abstract

The integration of artificial intelligence (AI), particularly large language models (LLMs) like OpenAI’s ChatGPT, into clinical research could significantly enhance the informed consent process. This paper critically examines the ethical implications of employing LLMs to facilitate consent in clinical research. LLMs could offer considerable benefits, such as improving participant understanding and engagement, broadening participants’ access to the relevant information for informed consent, and increasing the efficiency of consent procedures. However, these theoretical advantages are accompanied by ethical risks, including the potential for misinformation, coercion, and challenges in accountability. Given the complex nature of consent in clinical research, which involves both written documentation (in the form of participant information sheets and informed consent forms) and in-person conversations with a researcher, the use of LLMs raises significant concerns about the adequacy of existing regulatory frameworks. Institutional Review Boards (IRBs) will need to consider substantial reforms to accommodate the integration of LLM-based consent processes. We explore five potential models for LLM implementation, ranging from supplementary roles to complete replacements of current consent processes, and offer recommendations for researchers and IRBs to navigate the ethical landscape. Thus, we aim to provide practical recommendations to facilitate the ethical introduction of LLM-based consent in research settings by considering factors such as participant understanding, information accuracy, human oversight and types of LLM applications in clinical research consent.


Here are some thoughts:

This paper examines the ethical implications of using large language models (LLMs) for informed consent in clinical research. While LLMs offer potential benefits, including personalized information, increased participant engagement, and improved efficiency, they also present risks related to accuracy, manipulation, and accountability. The authors explore five potential models for LLM implementation in consent processes, ranging from supplementary roles to complete replacements of current methods. Ultimately, they propose a hybrid approach that combines traditional consent methods with LLM-based interactions to maximize participant autonomy while maintaining ethical safeguards.

Tuesday, December 10, 2024

Principles of Clinical Ethics and Their Application to Practice

Varkey, B. (2020).
Medical Principles and Practice, 30(1), 17–28.
https://doi.org/10.1159/000509119

Abstract

An overview of ethics and clinical ethics is presented in this review. The 4 main ethical principles, that is beneficence, nonmaleficence, autonomy, and justice, are defined and explained. Informed consent, truth-telling, and confidentiality spring from the principle of autonomy, and each of them is discussed. In patient care situations, not infrequently, there are conflicts between ethical principles (especially between beneficence and autonomy). A four-pronged systematic approach to ethical problem-solving and several illustrative cases of conflicts are presented. Comments following the cases highlight the ethical principles involved and clarify the resolution of these conflicts. A model for patient care, with caring as its central element, that integrates ethical aspects (intertwined with professionalism) with clinical and technical expertise desired of a physician is illustrated.

Highlights of the Study
  • Main principles of ethics, that is beneficence, nonmaleficence, autonomy, and justice, are discussed.
  • Autonomy is the basis for informed consent, truth-telling, and confidentiality.
  • A model to resolve conflicts when ethical principles collide is presented.
  • Cases that highlight ethical issues and their resolution are presented.
  • A patient care model that integrates ethics, professionalism, and cognitive and technical expertise is shown.

Here are some thoughts: 

This article explores the ethical principles of clinical medicine, focusing on four core principles: beneficence, nonmaleficence, autonomy, and justice. The article defines and explains each principle, using numerous illustrative cases to demonstrate how these principles might conflict in practice. Finally, the article concludes by discussing the importance of professionalism in clinical practice, emphasizing caring as the central element of the doctor-patient relationship.

Friday, December 6, 2024

Should we put pig organs in humans? We asked an ethicist.

Mandy Nguyen
vox.com
Originally posted 30 NOV 24

In 2022, surgeons transplanted the first genetically engineered pig heart into a human. Fifty-seven-year-old David Bennett, a patient with heart failure, survived almost two months with a pig heart beating in his chest, one of five people who have received pig organs as a part of an experimental procedure called xenotransplantation — the transplanting of living cells, tissues, or organs from one species to another.

Some scientists view these pig organ transplants as potentially lifesaving for many like Bennett.

In the US alone, more than 100,000 people are waiting for an organ transplant, and almost 20 people die every day because they can’t get one in time. But a major challenge remains in making xenotransplantation work: scientists haven’t figured out how to get a human body to accept a pig organ for very long. None of the five patients who received these pig organs have survived beyond two months, though researchers believe they’re making progress toward overcoming rejection and eventually moving to clinical trials.

This push to make pig organs viable for humans also comes with enormous ethical implications — from concerns surrounding the use of humans in an experimental procedure that they’re highly unlikely to survive, to the impacts on animals who are supplying the organs themselves. At first glance, the pursuit can feel like hubris. I wanted to better understand these questions, so I spoke with bioethicist L. Syd Johnson, author of a 2022 paper on the ethics of xenotransplantation, for Unexplainable, a Vox podcast that explores unanswered scientific questions. A portion of our conversation, edited for clarity, is included below.


Here are some thoughts:

The article explores the ethical complexities surrounding xenotransplantation, the experimental process of transplanting animal organs, particularly from genetically engineered pigs, into humans. This procedure is viewed as a potential solution to the critical shortage of human donor organs, with over 100,000 people in the U.S. waiting for transplants. However, its experimental nature raises significant challenges, including the body’s rejection of animal organs and the risk of zoonotic disease transmission. Ethical concerns extend to the welfare of the pigs, which are genetically modified and bred solely for organ harvesting. These animals are subjected to invasive procedures and kept in artificial environments, raising questions about the morality of such treatment of sentient beings.

Additionally, the process of obtaining informed consent from patients facing imminent death presents challenges. These patients may not fully grasp the experimental nature of the procedure or the low likelihood of its success, complicating the concept of voluntary and informed decision-making. The environmental impact is another concern, as scaling up xenotransplantation could exacerbate the harms associated with factory farming, such as resource intensiveness and ecological strain. Critics also question whether the significant resources invested in this technology might be better allocated toward preventive healthcare, lab-grown human organs, or therapies aimed at reducing organ failure.

These ethical issues are intertwined with broader questions of equity, sustainability, and opportunity costs. As xenotransplantation progresses, it risks deepening inequalities, as advanced procedures may be accessible only to wealthier individuals. Furthermore, the potential public health risks, such as zoonotic disease transmission, require careful consideration against the procedure’s potential benefits. Ultimately, the discussion calls for a reflective examination of the balance between innovation and ethical responsibility, ensuring that advancements in biotechnology align with principles of justice, compassion, and sustainability.

Sunday, November 24, 2024

World Medical Association Declaration of Helsinki Ethical Principles for Medical Research Involving Human Participants

World Medical Association
JAMA. Published online October 19, 2024.

Preamble

1. The World Medical Association (WMA) has developed the Declaration of Helsinki as a statement of ethical principles for medical research involving human participants, including research using identifiable human material or data.

The Declaration is intended to be read as a whole, and each of its constituent paragraphs should be applied with consideration of all other relevant paragraphs.

2. While the Declaration is adopted by physicians, the WMA holds that these principles should be upheld by all individuals, teams, and organizations involved in medical research, as these principles are fundamental to respect for and protection of all research participants, including both patients and healthy volunteers.


Here are some thoughts:

The World Medical Association's Declaration of Helsinki outlines ethical principles for medical research involving human participants. It emphasizes the primacy of patient well-being, the importance of scientific integrity, and the need to protect participant rights and privacy. Research must be justified by its potential benefits, minimize risks, and involve informed consent. Vulnerable populations require special consideration, and post-trial provisions must be made for participants. Researchers have a duty to publish results, both positive and negative, and to ensure ethical conduct throughout the research process.

Wednesday, October 23, 2024

Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?

Hatherley, J. (2024).
Journal of Medical Ethics, jme-109905.
https://doi.org/10.1136/jme-2024-109905

Abstract

It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue that each of these four arguments are unconvincing, and therefore, that the disclosure thesis ought to be rejected. I suggest that mandating disclosure may also even risk harming patients by providing stakeholders with a way to avoid accountability for harm that results from improper applications or uses of these systems.


Here are some thoughts:

The ethical obligation for clinicians to disclose their use of medical machine learning (ML) systems—known as the 'disclosure thesis'—is widely accepted in healthcare. However, this article challenges the validity of this thesis by critically examining four main arguments that support it: the risk-based, rights-based, materiality, and autonomy arguments. Each of these arguments has significant shortcomings.

The risk-based argument suggests that disclosure mitigates risks associated with ML systems, but it does not adequately address the complexities of risk management in clinical practice. The rights-based argument posits that patients have a right to know about ML usage, yet this right may not translate into meaningful patient understanding or improved outcomes. Similarly, the materiality argument claims that disclosure is necessary for informed consent, but it risks overwhelming patients with information that might not be actionable. Lastly, the autonomy argument asserts that disclosure enhances patient autonomy; however, it could inadvertently diminish autonomy by creating a false sense of security.

The article concludes that mandating disclosure may lead to unintended consequences, such as reducing accountability for harm resulting from improper ML applications. Clinicians and stakeholders might misuse disclosure as a protective measure against responsibility, thus failing to address the underlying issues. Moving forward, the focus should shift from mere disclosure to establishing robust accountability frameworks that genuinely protect patients and foster meaningful understanding of the technologies involved.

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Sunday, September 24, 2023

Consent GPT: Is It Ethical to Delegate Procedural Consent to Conversational AI?

Allen, J., Earp, B., Koplin, J. J., & Wilkinson, D.

Abstract

Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (e.g. junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is through the use of conversational artificial intelligence (AI) using large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients’ access to the relevant procedural information and therefore enhance informed decision-making.

In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine concluding that at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.

-------------

Here are some additional points from the article:
  • The authors argue that the current system of delegating procedural consent to human consent-takers is not always effective, as consent-takers may lack sufficient time or clinical knowledge to adequately promote patient autonomy and informed decision-making.
  • They suggest that LLMs could be used to provide patients with more comprehensive and accurate information about procedures, and to answer patients' questions in a way that is tailored to their individual needs.
  • However, the authors also acknowledge that there are a number of ethical concerns that need to be addressed before LLMs can be used for procedural consent. These include concerns about bias, accuracy, and patient trust.

Thursday, August 24, 2023

The Limits of Informed Consent for an Overwhelmed Patient: Clinicians’ Role in Protecting Patients and Preventing Overwhelm

Bester, J., Cole, C. M., & Kodish, E. (2016).
AMA Journal of Ethics, 18(9), 869–886.
https://doi.org/10.1001/journalofethics.2016.18.9.peer2-1609

Abstract

In this paper, we examine the limits of informed consent with particular focus on ways in which various factors can overwhelm decision-making capacity. We introduce overwhelm as a phenomenon commonly experienced by patients in clinical settings and distinguish between emotional overwhelm and informational overload. We argue that in these situations, a clinician’s primary duty is prevention of harm and suggest ways in which clinicians can discharge this obligation. To illustrate our argument, we consider the clinical application of genetic sequencing testing, which involves scientific and technical information that can compromise the understanding and decisional capacity of most patients. Finally, we consider and rebut objections that this could lead to paternalism.

(cut)

Overwhelm and Information Overload

The claim we defend is a simple one: there are medical situations in which the information involved in making a decision is of such a nature that the decision-making capacity of a patient is overwhelmed by the sheer complexity or volume of information at hand. In such cases a patient cannot attain the understanding necessary for informed decision making, and informed consent is therefore not possible. We will support our thesis regarding informational overload by focusing specifically on the area of clinical whole genome sequencing—i.e., identification of an individual’s entire genome, enabling the identification and interaction of multiple genetic variants—as distinct from genetic testing, which tests for specific genetic variants.

We will first present ethical considerations regarding informed consent. Next, we will present three sets of factors that can burden the capacity of a patient to provide informed consent for a specific decision—patient, communication, and information factors—and argue that these factors may in some circumstances make it impossible for a patient to provide informed consent. We will then discuss emotional overwhelm and informational overload and consider how being overwhelmed affects informed consent. Our interest in this essay is mainly in informational overload; we will therefore consider whole genome sequencing as an example in which informational factors overwhelm a patient’s decision-making capacity. Finally, we will offer suggestions as to how the duty to protect patients from harm can be discharged when informed consent is not possible because of emotional overwhelm or informational overload.

(cut)

How should clinicians respond to such situations?

Surrogate decision making. One possible solution to the problem of informed consent when decisional capacity is compromised is to seek a surrogate decision maker. However, in situations of informational overload, this may not solve the problem. If the information has inherent qualities that would overwhelm a reasonable patient, it is likely to also overwhelm a surrogate. Unless the surrogate decision maker is a content expert who also understands the values of the patient, a surrogate decision maker will not solve the problem of informed consent. Surrogate decision making may, however, be useful for the emotionally overwhelmed patient who remains unable to provide informed consent despite additional support.

Shared decision making. Another possible solution is to make use of shared decision making (SDM). This approach relies on deliberation between clinician and patient regarding available health care choices, taking the best evidence into account. The clinician actively involves the patient and elicits patient values. The goal of SDM is often stated as helping patients arrive at informed decisions that respect what matters most to them.

It is not clear, however, that SDM will be successful in facilitating informed decisions when an informed consent process has failed. SDM as a tool for informed decision making is at its core dependent on the patient understanding the options presented and being able to describe the preferred option. Understanding and deliberating about what is at stake for each option is a key component of this use of SDM. Therefore, if the medical information is so complex that it overloads the patient’s decision-making capacity, SDM is unlikely to achieve informed decision making. But if a patient is emotionally overwhelmed by the illness experience and all that accompanies it, a process of SDM and support for the patient may eventually facilitate informed decision making.

Monday, August 7, 2023

Shake-up at top psychiatric institute following suicide in clinical trial

Brendan Borrell
Spectrum News
Originally posted 31 July 23

Here are two excerpts:

The audit and turnover in leadership comes after the halting of a series of clinical trials conducted by Columbia psychiatrist Bret Rutherford, which tested whether the drug levodopa — typically used to treat Parkinson’s disease — could improve mood and mobility in adults with depression.

During a double-blind study that began in 2019, a participant in the placebo group died by suicide. That study was suspended prior to completion, according to an update posted on ClinicalTrials.gov in 2022.

Two published reports based on Rutherford’s pilot studies have since been retracted, as Spectrum has previously reported. The National Institute of Mental Health has terminated Rutherford’s trials and did not renew funding of his research grant or K24 Midcareer Award.

Former members of Rutherford’s laboratory describe it as a high-pressure environment that often put publications ahead of study participants. “Research is important, but not more so than the lives of those who participate in it,” says Kaleigh O’Boyle, who served as clinical research coordinator there from 2018 to 2020.

Although Rutherford’s faculty page is still active, he is no longer listed in the directory at Columbia University, where he was associate professor, and the voicemail at his former number says he is no longer checking it. He did not respond to voicemails and text messages sent to his personal phone or to emails sent to his Columbia email address, and Cantor would not comment on his employment status.

The circumstances around the suicide remain unclear, and the institute has previously declined to comment on Rutherford’s retractions. Veenstra-VanderWeele confirmed that he is the new director but did not respond to further questions about the situation.

(cut)

In January 2022, the study was temporarily suspended by the U.S. National Institute of Mental Health, following the suicide. It is unknown whether that participant had been taking any antidepressant medication prior to the study.

Four of Rutherford’s published studies were subsequently retracted or corrected for issues related to how participants taking antidepressants at enrollment were handled.

One retraction notice, published in February, indicates that tapering could be challenging and that the researchers did not always adhere to the protocol. One-third of the participants taking antidepressants were unable to successfully taper off them.


Note: The article serves as a cautionary tale about the risks of clinical trials. While clinical trials are a valuable way to test new drugs and treatments, they also carry risks: participants may be exposed to experimental drugs that have not been fully tested, and they may experience side effects that are not well understood. Ethical researchers must follow established guidelines and report accurate results.

Sunday, January 29, 2023

UCSF Issues Report, Apologizes for Unethical 1960-70’s Prison Research

Restorative Justice Calls for Continued Examination of the Past

Laura Kurtzman
Press Release
Originally posted 20 DEC 22

Recognizing that justice, healing and transformation require an acknowledgment of past harms, UCSF has created the Program for Historical Reconciliation (PHR). The program is housed under the Office of the Executive Vice Chancellor and Provost, and was started by current Executive Vice Chancellor and Provost, Dan Lowenstein, MD.

The program’s first report, released this month, investigates experiments from the 1960s and 1970s involving incarcerated men at the California Medical Facility (CMF) in Vacaville. Many of these men were being assessed or treated for psychiatric diagnoses.

The research reviewed in the report was performed by Howard Maibach, MD, and William Epstein, MD, both faculty in UCSF’s Department of Dermatology. Epstein was a former chair of the department who died in 2006. The committee was asked to focus on the work of Maibach, who remains an active member of the department.

Some of the experiments exposed research subjects to pesticides and herbicides or administered medications with side effects. In all, some 2,600 incarcerated men were experimented on.

The men volunteered for the studies and were paid for participating. But the report raises ethical concerns over how the research was conducted. In many cases there was no record of informed consent. The subjects also did not have any of the medical conditions that the experiments could potentially have treated or ameliorated.

Such practices were common in the U.S. at the time and were increasingly being criticized both by experts and in the lay press. The research continued until 1977, when the state of California halted all human subject research in state prisons, a year after the federal government did the same.

The report acknowledges that Maibach was working during a time when the governance of human subjects research was evolving, both at UCSF and at institutions across the country. Over a six-month period, the committee gathered some 7,000 archival documents, medical journal articles, interviews, documentaries and books, much of which has yet to be analyzed. UCSF has acknowledged that it may issue a follow-up report.

The report found that “Maibach practiced questionable research methods. Archival records and published articles have failed to show any protocols that were adopted regarding informed consent and communicating research risks to participants who were incarcerated.”

In a review of publications between 1960 and 1980, the committee found virtually all of Maibach’s studies lacked documentation of informed consent despite a requirement for formal consent instituted in 1966 by the newly formed Committee on Human Welfare and Experimentation. Only one article, published in 1975, indicated the researchers had obtained informed consent as well as approval from UCSF’s Committee for Human Research (CHR), which began in 1974 as a result of new federal requirements.


Tuesday, November 1, 2022

LinkedIn ran undisclosed social experiments on 20 million users for years to study job success

Kathleen Wong
USAToday.com
Originally posted 25 SEPT 22

A new study analyzing the data of more than 20 million LinkedIn users over a span of five years reveals that our acquaintances may be more helpful in finding a new job than close friends.

Researchers behind the study say the findings will improve job mobility on the platform, but since users were unaware that their data was being studied, some may find the lack of transparency concerning.

Published this month in Science, the study was conducted by researchers from LinkedIn, Harvard Business School and the Massachusetts Institute of Technology between 2015 and 2019. Researchers ran "multiple large-scale randomized experiments" on the platform's "People You May Know" algorithm, which suggests new connections to users. 

In a practice known as A/B testing, the experiments gave certain users a version of the algorithm that offered different contact recommendations (for example, closer or more distant connections) and then analyzed the new jobs that came out of those two billion new connections.
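The mechanics of such an experiment can be sketched in a few lines. The snippet below is a minimal illustration of the two ingredients of an A/B test of this kind: deterministic bucketing of users into experimental arms, and a comparison of outcome rates between arms. It is not LinkedIn's actual implementation; the arm names, salt string, and outcome metric are invented for the example.

```python
import hashlib
from math import sqrt

def assign_variant(user_id: int, salt: str = "pymk-exp-1") -> str:
    """Deterministically bucket a user into one of two test arms.

    Hashing the user ID with a per-experiment salt (rather than flipping
    a coin at request time) keeps each user in the same arm across sessions.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    return "weak_ties" if digest[0] < 128 else "strong_ties"

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """z-statistic comparing an outcome rate (e.g., job transitions) between arms."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

A large positive or negative z-value would indicate that the arms differ in how often users later report a new job; the pooled two-proportion z-statistic is a standard first check for that kind of difference.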

(cut)

A question of ethics

Privacy advocates told the New York Times Sunday that some of the 20 million LinkedIn users may not be happy that their data was used without consent. That resistance is part of a longstanding pattern of people's data being tracked and used by tech companies without their knowledge.

LinkedIn told the paper it "acted consistently" with its user agreement, privacy policy and member settings.

LinkedIn did not respond to an email sent by USA TODAY on Sunday. 

The paper reports that LinkedIn's privacy policy does state the company reserves the right to use its users' personal data.

That access can be used "to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity." 

It can also be deployed to research trends.

The company also said it used "noninvasive" techniques for the study's research. 

Aral told USA TODAY that researchers "received no private or personally identifying data during the study and only made aggregate data available for replication purposes to ensure further privacy safeguards."

Friday, April 29, 2022

Navy Deputizes Psychologists to Enforce Drug Rules Even for Those Seeking Mental Health Help

Konstantin Toropin
Military.com
Originally posted 18 APR 22

In the wake of reports that a Navy psychologist played an active role in a sailor's drug-use conviction after the sailor reached out for mental health assistance, the service is standing by its policy, which does not provide patients with confidentiality and could mean that seeking help has consequences for service members.

The case highlights a set of military regulations that, in vaguely defined circumstances, require doctors to inform commanding officers of certain medical details, including drug test results, even if those tests are conducted for legitimate medical reasons necessary for adequate care. Allowing punishment when service members seek help could act as a deterrent in a community where mental health is still a taboo topic among many, despite recent leadership attempts to discuss getting assistance more openly.

On April 11, Military.com reported the story of a sailor and his wife who alleged that the sailor's command, the destroyer USS Farragut, was retaliating against him for seeking mental health help.

Jatzael Alvarado Perez went to a military hospital to get help for his mental health struggles. As part of his treatment, he was given a drug test that came back positive for cannabinoids -- the family of drugs associated with marijuana. Perez denies having used any substances, but the test resulted in a referral to the ship's chief corpsman.

Perez's wife, Carli Alvarado, shared documents with Military.com that were evidence in the sailor's subsequent nonjudicial punishment, showing that the Farragut found out about the results because the psychologist emailed the ship's medical staff directly, according to a copy of the email.

"I'm not sure if you've been tracking, but OS2 Alvarado Perez popped positive for cannabis while inpatient," read the email, written to the ship's medical chief. Navy policy prohibits punishment for a positive drug test when administered as part of regular medical care.

The email goes on to describe efforts by the psychologist to assist in obtaining a second test -- one that could be used to punish Perez.

"We are working to get him a command directed urinalysis through [our command] today," it added.

Saturday, February 26, 2022

Experts Are Ringing Alarms About Elon Musk’s Brain Implants

Noah Kirsch
Daily Beast
Originally posted 25 Jan 2022

Here is an excerpt:

“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.

“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”

In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.

Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.

There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”

The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?

Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”

Neuralink did not respond to a request for comment as of publication time.

Tuesday, November 9, 2021

Louisiana woman learns WWII vet husband’s cadaver dissected at pay-per-view event

Peter Aitken
YahooNews.com
Originally published 7 NOV 21

The family of a deceased Louisiana man found out that his body ended up in a ticketed live human dissection as part of a traveling expo.

David Saunders, a World War II and Korean War veteran who lived in Baker, died at the age of 98 from COVID-19 complications in August. His family donated his remains to science – or so they thought: Instead, his wife, Elsie Saunders, discovered that his body had ended up in an "Oddities and Curiosities Expo" in Oregon.

The expo, organized by DeathScience.org, was set up at the Portland Marriott Downtown Waterfront. People could watch a live human dissection on Oct. 17 for up to $500 a seat, KING-TV reported.

"From the external body exam to the removal of vital organs including the brain, we will find new perspectives on how the human body can tell a story," an online event description says. "There will be several opportunities for attendees to get an up-close and personal look at the cadaver."

The Seattle-based station sent an undercover reporter to the expo and noted David Saunders’ name on a bracelet he was wearing. The reporter was able to contact Elsie Saunders and let her know what had happened.

She was, understandably, horrified.

"It’s horrible what has happened to my husband," Elsie Saunders told NBC News. "I didn’t know he was going to be … put on display like a performing bear or something. I only consented to body donation or scientific purposes."

"That’s the way my husband wanted it," she explained. "To say the least, I’m upset."

Monday, May 10, 2021

Do Brain Implants Change Your Identity?

Christine Kenneally
The New Yorker
Originally posted 19 Apr 21

Here are two excerpts:

Today, at least two hundred thousand people worldwide, suffering from a wide range of conditions, live with a neural implant of some kind. In recent years, Mark Zuckerberg, Elon Musk, and Bryan Johnson, the founder of the payment-processing company Braintree, all announced neurotechnology projects for restoring or even enhancing human abilities. As we enter this new era of extra-human intelligence, it’s becoming apparent that many people develop an intense relationship with their device, often with profound effects on their sense of identity. These effects, though still little studied, are emerging as crucial to a treatment’s success.

The human brain is a small electrical device of super-galactic complexity. It contains an estimated hundred billion neurons, with many more links between them than there are stars in the Milky Way. Each neuron works by passing an electrical charge along its length, causing neurotransmitters to leap to the next neuron, which ignites in turn, usually in concert with many thousands of others. Somehow, human intelligence emerges from this constant, thrilling choreography. How it happens remains an almost total mystery, but it has become clear that neural technologies will be able to synch with the brain only if they learn the steps of this dance.

(cut)

For the great majority of patients, deep-brain stimulation was beneficial and life-changing, but there were occasional reports of strange behavioral reactions, such as hypomania and hypersexuality. Then, in 2006, a French team published a study about the unexpected consequences of otherwise successful implantations. Two years after a brain implant, sixty-five per cent of patients had a breakdown in their marriages or relationships, and sixty-four per cent wanted to leave their careers. Their intellect and their levels of anxiety and depression were the same as before, or, in the case of anxiety, had even improved, but they seemed to experience a fundamental estrangement from themselves. One felt like an electronic doll. Another said he felt like RoboCop, under remote control.

Gilbert describes himself as “an applied eliminativist.” He doesn’t believe in a soul, or a mind, at least as we normally think of them, and he strongly questions whether there is a thing you could call a self. He suspected that people whose marriages broke down had built their identities and their relationships around their pathologies. When those were removed, the relationships no longer worked. Gilbert began to interview patients. He used standardized questionnaires, a procedure that is methodologically vital for making dependable comparisons, but soon he came to feel that something about this unprecedented human experience was lost when individual stories were left out. The effects he was studying were inextricable from his subjects’ identities, even though those identities changed.

Many people reported that the person they were after treatment was entirely different from the one they’d been when they had only dreamed of relief from their symptoms. Some experienced an uncharacteristic buoyancy and confidence. One woman felt fifteen years younger and tried to lift a pool table, rupturing a disk in her back. One man noticed that his newfound confidence was making life hard for his wife; he was too “full-on.” Another woman became impulsive, walking ten kilometres to a psychologist’s appointment nine days after her surgery. She was unrecognizable to her family. They told her that they grieved for the old her.

Monday, November 23, 2020

Ethical & Legal Considerations of Patients Audio Recording, Videotaping, & Broadcasting Clinical Encounters

Ferguson BD, Angelos P. 
JAMA Surg. 
Published online October 21, 2020. 

Given the increased availability of smartphones and other devices capable of capturing audio and video, it has become increasingly easy for patients to record medical encounters. This behavior can occur overtly, with or without the physician’s express consent, or covertly, without the physician’s knowledge or consent. The following hypothetical cases demonstrate specific scenarios in which physicians have been recorded during patient care.

A patient has come to your clinic seeking a second opinion. She was recently treated for cholangiocarcinoma at another hospital. During her postoperative course, major complications occurred that required a prolonged index admission and several interventional procedures. She is frustrated with the protracted management of her complications. In your review of her records, it becomes evident that her operation may not have been indicated; moreover, it appears that gross disease was left in situ owing to the difficulty of the operation. You eventually recognize that she was never informed of the intraoperative findings and final pathology report. During your conversation, you notice that her husband opens an audio recording app on his phone and places it face up on the desk to document your conversation.

(cut) 

From the Discussion

Each of these cases differs, yet each reflects the general issue of patients recording interactions with their physicians. In the following discussion, we explore a number of ethical and legal considerations raised by such cases and offer suggestions for ways physicians might best navigate these complex situations.

These cases illustrate potentially difficult patient interactions—the first, a delicate conversation involving surgical error; the second, ongoing management of a life-threatening postoperative complication; and the third, a straightforward bedside procedure involving unintended bystanders. When audio or video recording is introduced in clinical encounters, the complexity of these situations can be magnified. It is sometimes challenging to balance a patient’s need to document a physician encounter with the desire for the physician to maintain the patient-physician relationship. Patient autonomy depends on the fidelity with which information is transferred from physician to patient. 

In many cases, patients record encounters to ensure well-informed decision making and therefore to preserve autonomy. In others, patients may have ulterior motives for recording an encounter.