Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Informed Consent.

Sunday, September 24, 2023

Consent GPT: Is It Ethical to Delegate Procedural Consent to Conversational AI?

Allen, J., Earp, B., Koplin, J. J., & Wilkinson, D.

Abstract

Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (e.g. junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is through the use of conversational artificial intelligence (AI) using large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients’ access to the relevant procedural information and therefore enhance informed decision-making.

In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine, concluding that, at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.

-------------

Here are some additional points from the article:
  • The authors argue that the current system of delegating procedural consent to human consent-takers is not always effective, as consent-takers may lack sufficient time or clinical knowledge to adequately promote patient autonomy and informed decision-making.
  • They suggest that LLMs could be used to provide patients with more comprehensive and accurate information about procedures, and to answer patients' questions in a way that is tailored to their individual needs (a rough sketch of what this might look like appears after this list).
  • However, the authors also acknowledge that there are a number of ethical concerns that need to be addressed before LLMs can be used for procedural consent. These include concerns about bias, accuracy, and patient trust.
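
The paper is conceptual and does not include an implementation, so the following is only a minimal sketch of what grounding a consent chatbot in approved material might look like. Everything in it is an assumption for illustration: the `call_llm` helper stands in for whatever LLM API a clinic might use, and the leaflet text is invented.

```python
# Hypothetical sketch of a consent chatbot grounded in vetted material.
# `call_llm` is a stand-in for a real LLM API, not an actual library call.

PROCEDURE_LEAFLET = """Laparoscopic cholecystectomy (keyhole gallbladder removal).
Common risks: bleeding, infection, bile leak, conversion to open surgery.
Recovery: most patients go home the same day or the next day."""

SYSTEM_PROMPT = (
    "You help patients understand the procedure described in the approved "
    "leaflet below. Answer only from the leaflet. If a question goes beyond "
    "it, say so and refer the patient to the surgical team. Do not recommend "
    "for or against the procedure.\n\n" + PROCEDURE_LEAFLET
)

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a hospital-approved LLM API call."""
    raise NotImplementedError("wire this to an actual LLM provider")

def answer_patient(question: str) -> str:
    """Answer one patient question and log the exchange for clinician review."""
    reply = call_llm(SYSTEM_PROMPT, question)
    print(f"[consent log] Q: {question}\n[consent log] A: {reply}")
    return reply
```

The two constraints in the sketch (answers restricted to vetted content, and an auditable log a clinician can review before consent is signed) track the paper's central concern: any delegate, human or machine, must convey accurate procedural information.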

Thursday, August 24, 2023

The Limits of Informed Consent for an Overwhelmed Patient: Clinicians’ Role in Protecting Patients and Preventing Overwhelm

J. Bester, C.M. Cole, & E. Kodish.
AMA J Ethics. 2016;18(9):869-886.
doi: 10.1001/journalofethics.2016.18.9.peer2-1609.

Abstract

In this paper, we examine the limits of informed consent with particular focus on ways in which various factors can overwhelm decision-making capacity. We introduce overwhelm as a phenomenon commonly experienced by patients in clinical settings and distinguish between emotional overwhelm and informational overload. We argue that in these situations, a clinician’s primary duty is prevention of harm and suggest ways in which clinicians can discharge this obligation. To illustrate our argument, we consider the clinical application of genetic sequencing testing, which involves scientific and technical information that can compromise the understanding and decisional capacity of most patients. Finally, we consider and rebut objections that this could lead to paternalism.

(cut)

Overwhelm and Information Overload

The claim we defend is a simple one: there are medical situations in which the information involved in making a decision is of such a nature that the decision-making capacity of a patient is overwhelmed by the sheer complexity or volume of information at hand. In such cases a patient cannot attain the understanding necessary for informed decision making, and informed consent is therefore not possible. We will support our thesis regarding informational overload by focusing specifically on the area of clinical whole genome sequencing—i.e., identification of an individual’s entire genome, enabling the identification of multiple genetic variants and their interactions—as distinct from genetic testing, which tests for specific genetic variants.

We will first present ethical considerations regarding informed consent. Next, we will present three sets of factors that can burden the capacity of a patient to provide informed consent for a specific decision—patient, communication, and information factors—and argue that these factors may in some circumstances make it impossible for a patient to provide informed consent. We will then discuss emotional overwhelm and informational overload and consider how being overwhelmed affects informed consent. Our interest in this essay is mainly in informational overload; we will therefore consider whole genome sequencing as an example in which informational factors overwhelm a patient’s decision-making capacity. Finally, we will offer suggestions as to how the duty to protect patients from harm can be discharged when informed consent is not possible because of emotional overwhelm or informational overload.

(cut)

How should clinicians respond to such situations?

Surrogate decision making. One possible solution to the problem of informed consent when decisional capacity is compromised is to seek a surrogate decision maker. However, in situations of informational overload, this may not solve the problem. If the information has inherent qualities that would overwhelm a reasonable patient, it is likely to also overwhelm a surrogate. Unless the surrogate decision maker is a content expert who also understands the values of the patient, a surrogate will not solve the problem of informed consent. Surrogate decision making may, however, be useful for the emotionally overwhelmed patient who remains unable to provide informed consent despite additional support.

Shared decision making. Another possible solution is to make use of shared decision making (SDM). This approach relies on deliberation between clinician and patient regarding available health care choices, taking the best evidence into account. The clinician actively involves the patient and elicits patient values. The goal of SDM is often stated as helping patients arrive at informed decisions that respect what matters most to them.

It is not clear, however, that SDM will be successful in facilitating informed decisions when an informed consent process has failed. SDM as a tool for informed decision making is at its core dependent on the patient understanding the options presented and being able to describe the preferred option. Understanding and deliberating about what is at stake for each option is a key component of this use of SDM. Therefore, if the medical information is so complex that it overloads the patient’s decision-making capacity, SDM is unlikely to achieve informed decision making. But if a patient is emotionally overwhelmed by the illness experience and all that accompanies it, a process of SDM and support for the patient may eventually facilitate informed decision making.

Monday, August 7, 2023

Shake-up at top psychiatric institute following suicide in clinical trial

Brendan Borrell
Spectrum News
Originally posted 31 July 23

Here are two excerpts:

The audit and turnover in leadership comes after the halting of a series of clinical trials conducted by Columbia psychiatrist Bret Rutherford, which tested whether the drug levodopa — typically used to treat Parkinson’s disease — could improve mood and mobility in adults with depression.

During a double-blind study that began in 2019, a participant in the placebo group died by suicide. That study was suspended prior to completion, according to an update posted on ClinicalTrials.gov in 2022.

Two published reports based on Rutherford’s pilot studies have since been retracted, as Spectrum has previously reported. The National Institute of Mental Health has terminated Rutherford’s trials and did not renew funding of his research grant or K24 Midcareer Award.

Former members of Rutherford’s laboratory describe it as a high-pressure environment that often put publications ahead of study participants. “Research is important, but not more so than the lives of those who participate in it,” says Kaleigh O’Boyle, who served as clinical research coordinator there from 2018 to 2020.

Although Rutherford’s faculty page is still active, he is no longer listed in the directory at Columbia University, where he was associate professor, and the voicemail at his former number says he is no longer checking it. He did not respond to voicemails and text messages sent to his personal phone or to emails sent to his Columbia email address, and Cantor would not comment on his employment status.

The circumstances around the suicide remain unclear, and the institute has previously declined to comment on Rutherford’s retractions. Veenstra-VanderWeele confirmed that he is the new director but did not respond to further questions about the situation.

(cut)

In January 2022, the study was temporarily suspended by the U.S. National Institute of Mental Health, following the suicide. It is unknown whether that participant had been taking any antidepressant medication prior to the study.

Four of Rutherford’s published studies were subsequently retracted or corrected for issues related to how participants taking antidepressants at enrollment were handled.

One retraction notice, published in February, indicates that tapering could be challenging and that the researchers did not always adhere to the protocol: one-third of the participants taking antidepressants were unable to taper off them successfully.


Note: The article serves as a cautionary tale about clinical trials. While trials are a valuable way to test new drugs and treatments, they carry real risks: participants may be exposed to experimental drugs that have not been fully tested and may experience side effects that are not well understood. Ethical researchers must follow guidelines and report accurate results.

Sunday, January 29, 2023

UCSF Issues Report, Apologizes for Unethical 1960-70’s Prison Research

Restorative Justice Calls for Continued Examination of the Past

Laura Kurtzman
Press Release
Originally posted 20 DEC 22

Recognizing that justice, healing and transformation require an acknowledgment of past harms, UCSF has created the Program for Historical Reconciliation (PHR). The program is housed under the Office of the Executive Vice Chancellor and Provost, and was started by current Executive Vice Chancellor and Provost, Dan Lowenstein, MD.

The program’s first report, released this month, investigates experiments from the 1960s and 1970s involving incarcerated men at the California Medical Facility (CMF) in Vacaville. Many of these men were being assessed or treated for psychiatric diagnoses.

The research reviewed in the report was performed by Howard Maibach, MD, and William Epstein, MD, both faculty in UCSF’s Department of Dermatology. Epstein was a former chair of the department who died in 2006. The committee was asked to focus on the work of Maibach, who remains an active member of the department.

Some of the experiments exposed research subjects to pesticides and herbicides or administered medications with side effects. In all, some 2,600 incarcerated men were experimented on.

The men volunteered for the studies and were paid for participating. But the report raises ethical concerns over how the research was conducted. In many cases there was no record of informed consent. The subjects also did not have any of the medical conditions that the experiments could potentially have treated or ameliorated.

Such practices were common in the U.S. at the time and were increasingly being criticized both by experts and in the lay press. The research continued until 1977, when the state of California halted all human subject research in state prisons, a year after the federal government did the same.

The report acknowledges that Maibach was working during a time when the governance of human subjects research was evolving, both at UCSF and at institutions across the country. Over a six-month period, the committee gathered some 7,000 archival documents, medical journal articles, interviews, documentaries and books; much of this material has yet to be analyzed. UCSF has acknowledged that it may issue a follow-up report.

The report found that “Maibach practiced questionable research methods. Archival records and published articles have failed to show any protocols that were adopted regarding informed consent and communicating research risks to participants who were incarcerated.”

In a review of publications between 1960 and 1980, the committee found virtually all of Maibach’s studies lacked documentation of informed consent despite a requirement for formal consent instituted in 1966 by the newly formed Committee on Human Welfare and Experimentation. Only one article, published in 1975, indicated the researchers had obtained informed consent as well as approval from UCSF’s Committee for Human Research (CHR), which began in 1974 as a result of new federal requirements.


Tuesday, November 1, 2022

LinkedIn ran undisclosed social experiments on 20 million users for years to study job success

Kathleen Wong
USAToday.com
Originally posted 25 SEPT 22

A new study analyzing the data of more than 20 million LinkedIn users over a five-year span reveals that our acquaintances may be more helpful in finding a new job than close friends.

Researchers behind the study say the findings will improve job mobility on the platform, but since users were unaware that their data was being studied, some may find the lack of transparency concerning.

Published this month in Science, the study was conducted by researchers from LinkedIn, Harvard Business School and the Massachusetts Institute of Technology between 2015 and 2019. Researchers ran "multiple large-scale randomized experiments" on the platform's "People You May Know" algorithm, which suggests new connections to users. 

In a practice known as A/B testing, the experiments gave different groups of users versions of the "People You May Know" algorithm that varied in how strongly they recommended close versus not-so-close contacts, and then analyzed the new jobs that resulted from the two billion new connections formed during the experiments.
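
The article doesn't include code, but the assignment step of an A/B test is standard enough to sketch. Below is a generic, hypothetical illustration (not LinkedIn's implementation; the experiment and arm names are made up): users are hashed into stable experiment arms, and outcomes are later compared across arms.

```python
import hashlib

def assign_arm(user_id: str, experiment: str, arms: list[str]) -> str:
    """Deterministically assign a user to an experiment arm: hashing the
    experiment name together with the user ID yields a stable, roughly
    uniform bucket without storing any per-user state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Made-up arms: vary how strongly the recommender favors close ties.
arms = ["favor_close_ties", "balanced", "favor_weak_ties"]
print(assign_arm("member-12345", "pymk-tie-strength", arms))
# The analysis step then compares outcomes (e.g., job changes) across arms.
```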

(cut)

A question of ethics

Privacy advocates told the New York Times Sunday that some of the 20 million LinkedIn users may not be happy that their data was used without consent. That resistance is part of a longstanding pattern of people's data being tracked and used by tech companies without their knowledge.

LinkedIn told the paper it "acted consistently" with its user agreement, privacy policy and member settings.

LinkedIn did not respond to an email sent by USA TODAY on Sunday. 

The paper reports that LinkedIn's privacy policy does state the company reserves the right to use its users' personal data.

That access can be used "to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity." 

It can also be deployed to research trends.

The company also said it used "noninvasive" techniques for the study's research. 

Aral told USA TODAY that researchers "received no private or personally identifying data during the study and only made aggregate data available for replication purposes to ensure further privacy safeguards."

Friday, April 29, 2022

Navy Deputizes Psychologists to Enforce Drug Rules Even for Those Seeking Mental Health Help

Konstantin Toropin
Military.com
Originally posted 18 APR 22

In the wake of reports that a Navy psychologist played an active role in convicting a sailor of drug use after the sailor reached out for mental health assistance, the service is standing by its policy, which does not provide patients with confidentiality and could mean that seeking help has consequences for service members.

The case highlights a set of military regulations that, in vaguely defined circumstances, requires doctors to inform commanding officers of certain medical details, including drug tests, even if those tests are conducted for legitimate medical reasons necessary for adequate care. Allowing punishment when service members are looking for help could act as a deterrent in a community where mental health is still a taboo topic among many, despite recent leadership attempts to more openly discuss getting assistance.

On April 11, Military.com reported the story of a sailor and his wife who alleged that the sailor's command, the destroyer USS Farragut, was retaliating against him for seeking mental health help.

Jatzael Alvarado Perez went to a military hospital to get help for his mental health struggles. As part of his treatment, he was given a drug test that came back positive for cannabinoids -- the family of drugs associated with marijuana. Perez denies having used any substances, but the test resulted in a referral to the ship's chief corpsman.

Perez's wife, Carli Alvarado, shared documents with Military.com that were evidence in the sailor's subsequent nonjudicial punishment, showing that the Farragut found out about the results because the psychologist emailed the ship's medical staff directly, according to a copy of the email.

"I'm not sure if you've been tracking, but OS2 Alvarado Perez popped positive for cannabis while inpatient," read the email, written to the ship's medical chief. Navy policy prohibits punishment for a positive drug test when administered as part of regular medical care.

The email goes on to describe efforts by the psychologist to assist in obtaining a second test -- one that could be used to punish Perez.

"We are working to get him a command directed urinalysis through [our command] today," it added.

Saturday, February 26, 2022

Experts Are Ringing Alarms About Elon Musk’s Brain Implants

Noah Kirsch
Daily Beast
Originally posted 25 Jan 2022

Here is an excerpt:

“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.

“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”

In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.

Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.

There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”

The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?

Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”

Neuralink did not respond to a request for comment as of publication time.

Tuesday, November 9, 2021

Louisiana woman learns WWII vet husband’s cadaver dissected at pay-per-view event

Peter Aitken
YahooNews.com
Originally published 7 NOV 21

The family of a deceased Louisiana man found out that his body ended up in a ticketed live human dissection as part of a traveling expo.

David Saunders, a World War II and Korean War veteran who lived in Baker, died at the age of 98 from COVID-19 complications in August. His family donated his remains to science – or so they thought. Instead, his wife, Elsie Saunders, discovered that his body had ended up in an "Oddities and Curiosities Expo" in Oregon.

The expo, organized by DeathScience.org, was set up at the Portland Marriott Downtown Waterfront. People could watch a live human dissection on Oct. 17 for up to $500 a seat, KING-TV reported.

"From the external body exam to the removal of vital organs including the brain, we will find new perspectives on how the human body can tell a story," an online event description says. "There will be several opportunities for attendees to get an up-close and personal look at the cadaver."

The Seattle-based station sent an undercover reporter to the expo, who noted David Saunders’ name on a bracelet the cadaver was wearing. The reporter was able to contact Elsie Saunders and let her know what had happened.

She was, understandably, horrified.

"It’s horrible what has happened to my husband," Elsie Saunders told NBC News. "I didn’t know he was going to be … put on display like a performing bear or something. I only consented to body donation or scientific purposes."

"That’s the way my husband wanted it," she explained. "To say the least, I’m upset."

Monday, May 10, 2021

Do Brain Implants Change Your Identity?

Christine Kenneally
The New Yorker
Originally posted 19 Apr 21

Here are two excerpts:

Today, at least two hundred thousand people worldwide, suffering from a wide range of conditions, live with a neural implant of some kind. In recent years, Mark Zuckerberg, Elon Musk, and Bryan Johnson, the founder of the payment-processing company Braintree, all announced neurotechnology projects for restoring or even enhancing human abilities. As we enter this new era of extra-human intelligence, it’s becoming apparent that many people develop an intense relationship with their device, often with profound effects on their sense of identity. These effects, though still little studied, are emerging as crucial to a treatment’s success.

The human brain is a small electrical device of super-galactic complexity. It contains an estimated hundred billion neurons, with many more links between them than there are stars in the Milky Way. Each neuron works by passing an electrical charge along its length, causing neurotransmitters to leap to the next neuron, which ignites in turn, usually in concert with many thousands of others. Somehow, human intelligence emerges from this constant, thrilling choreography. How it happens remains an almost total mystery, but it has become clear that neural technologies will be able to synch with the brain only if they learn the steps of this dance.

(cut)

For the great majority of patients, deep-brain stimulation was beneficial and life-changing, but there were occasional reports of strange behavioral reactions, such as hypomania and hypersexuality. Then, in 2006, a French team published a study about the unexpected consequences of otherwise successful implantations. Two years after a brain implant, sixty-five per cent of patients had a breakdown in their marriages or relationships, and sixty-four per cent wanted to leave their careers. Their intellect and their levels of anxiety and depression were the same as before, or, in the case of anxiety, had even improved, but they seemed to experience a fundamental estrangement from themselves. One felt like an electronic doll. Another said he felt like RoboCop, under remote control.

Gilbert describes himself as “an applied eliminativist.” He doesn’t believe in a soul, or a mind, at least as we normally think of them, and he strongly questions whether there is a thing you could call a self. He suspected that people whose marriages broke down had built their identities and their relationships around their pathologies. When those were removed, the relationships no longer worked. Gilbert began to interview patients. He used standardized questionnaires, a procedure that is methodologically vital for making dependable comparisons, but soon he came to feel that something about this unprecedented human experience was lost when individual stories were left out. The effects he was studying were inextricable from his subjects’ identities, even though those identities changed.

Many people reported that the person they were after treatment was entirely different from the one they’d been when they had only dreamed of relief from their symptoms. Some experienced an uncharacteristic buoyancy and confidence. One woman felt fifteen years younger and tried to lift a pool table, rupturing a disk in her back. One man noticed that his newfound confidence was making life hard for his wife; he was too “full-on.” Another woman became impulsive, walking ten kilometres to a psychologist’s appointment nine days after her surgery. She was unrecognizable to her family. They told her that they grieved for the old her.

Monday, November 23, 2020

Ethical & Legal Considerations of Patients Audio Recording, Videotaping, & Broadcasting Clinical Encounters

Ferguson BD, Angelos P. 
JAMA Surg. 
Published online October 21, 2020. 

Given the increased availability of smartphones and other devices capable of capturing audio and video, it has become increasingly easy for patients to record medical encounters. This behavior can occur overtly, with or without the physician’s express consent, or covertly, without the physician’s knowledge or consent. The following hypothetical cases demonstrate specific scenarios in which physicians have been recorded during patient care.

A patient has come to your clinic seeking a second opinion. She was recently treated for cholangiocarcinoma at another hospital. During her postoperative course, major complications occurred that required a prolonged index admission and several interventional procedures. She is frustrated with the protracted management of her complications. In your review of her records, it becomes evident that her operation may not have been indicated; moreover, it appears that gross disease was left in situ owing to the difficulty of the operation. You eventually recognize that she was never informed of the intraoperative findings and final pathology report. During your conversation, you notice that her husband opens an audio recording app on his phone and places it face up on the desk to document your conversation.

(cut) 

From the Discussion

Each of these cases differs, yet each reflects the general issue of patients recording interactions with their physicians. In the following discussion, we explore a number of ethical and legal considerations raised by such cases and offer suggestions for ways physicians might best navigate these complex situations.

These cases illustrate potentially difficult patient interactions—the first, a delicate conversation involving surgical error; the second, ongoing management of a life-threatening postoperative complication; and the third, a straightforward bedside procedure involving unintended bystanders. When audio or video recording is introduced in clinical encounters, the complexity of these situations can be magnified. It is sometimes challenging to balance a patient’s need to document a physician encounter with the desire for the physician to maintain the patient-physician relationship. Patient autonomy depends on the fidelity with which information is transferred from physician to patient. 

In many cases, patients record encounters to ensure well-informed decision making and therefore to preserve autonomy. In others, patients may have ulterior motives for recording an encounter.

Saturday, September 12, 2020

Psychotherapy, placebos, and informed consent

Leder G
Journal of Medical Ethics 
Published Online First: 20 August 2020.
doi: 10.1136/medethics-2020-106453

Abstract

Several authors have recently argued that psychotherapy, as it is commonly practiced, is deceptive and undermines patients’ ability to give informed consent to treatment. This ‘deception’ claim is based on the findings that some, and possibly most, of the ameliorative effects in psychotherapeutic interventions are mediated by therapeutic common factors shared by successful treatments (eg, expectancy effects and therapist effects), rather than because of theory-specific techniques. These findings have led to claims that psychotherapy is, at least partly, likely a placebo, and that practitioners of psychotherapy have a duty to ‘go open’ to patients about the role of common factors in therapy (even if this risks negatively affecting the efficacy of treatment); to not ‘go open’ is supposed to unjustly restrict patients’ autonomy. This paper makes two related arguments against the ‘go open’ claim. (1) While therapies ought to provide patients with sufficient information to make informed treatment decisions, informed consent does not require that practitioners ‘go open’ about therapeutic common factors in psychotherapy, and (2) clarity about the mechanisms of change in psychotherapy shows us that the common-factors findings are consistent with, rather than undermining of, the truth of many theory-specific forms of psychotherapy; psychotherapy, as it is commonly practiced, is not deceptive and is not a placebo. The call to ‘go open’ should be resisted and may have serious detrimental effects on patients via the dissemination of a false view about how therapy works.

Conclusion

The ‘go open’ argument is based on a mistaken view about the mechanisms of change in psychotherapy and threatens to harm patients by undermining their ability to make informed treatment decisions. This paper has argued that the prima facie ethical problem raised by the ‘go open’ argument is defused if we clear up a conceptual confusion about what, exactly, we should be going open about. Therapists should be open with patients about the differing theories of the mechanisms of change in psychotherapy; this can, but need not, involve discussing information about the therapeutic common factors.

The article is here.

Note from Dr. Gavazzi: Using "deception" is the wrong frame for this issue.  How complete is your informed consent?  Can we ever give "perfect" informed consent?  The answer is likely no.

Wednesday, May 20, 2020

Ethics of controlled human infection to study COVID-19

Shah, S. K., Miller, F. G., et al.
Science, 7 May 2020
DOI: 10.1126/science.abc1076

Abstract

Development of an effective vaccine is the clearest path to controlling the coronavirus disease 2019 (COVID-19) pandemic. To accelerate vaccine development, some researchers are pursuing, and thousands of people have expressed interest in participating in, controlled human infection studies (CHIs) with severe acute respiratory syndrome–coronavirus 2 (SARS-CoV-2) (1, 2). In CHIs, a small number of participants are deliberately exposed to a pathogen to study infection and gather preliminary efficacy data on experimental vaccines or treatments. We have been developing a comprehensive, state-of-the-art ethical framework for CHIs that emphasizes their social value as fundamental to justifying these studies. The ethics of CHIs in general are underexplored (3, 4), and ethical examinations of SARS-CoV-2 CHIs have largely focused on whether the risks are acceptable and participants could give valid informed consent (1). The high social value of such CHIs has generally been assumed. Based on our framework, we agree on the ethical conditions for conducting SARS-CoV-2 CHIs (see the table). We differ on whether the social value of such CHIs is sufficient to justify the risks at present, given uncertainty about both in a rapidly evolving situation; yet we see none of our disagreements as insurmountable. We provide ethical guidance for research sponsors, communities, participants, and the essential independent reviewers considering SARS-CoV-2 CHIs.

The info is here.

Monday, April 27, 2020

Experiments on Trial

Hannah Fry
The New Yorker
Originally posted 24 Feb 20

Here are two excerpts:

There are also times when manipulation leaves people feeling cheated. For instance, in 2018 the Wall Street Journal reported that Amazon had been inserting sponsored products in its consumers’ baby registries. “The ads look identical to the rest of the listed products in the registry, except for a small gray ‘Sponsored’ tag,” the Journal revealed. “Unsuspecting friends and family clicked on the ads and purchased the items,” assuming they’d been chosen by the expectant parents. Amazon’s explanation when confronted? “We’re constantly experimenting,” a spokesperson said. (The company has since ended the practice.)

But there are times when the experiments go further still, leaving some to question whether they should be allowed at all. There was a notorious experiment run by Facebook in 2012, in which the number of positive and negative posts in six hundred and eighty-nine thousand users’ news feeds was tweaked. The aim was to see how the unwitting participants would react. As it turned out, those who saw less negative content in their feeds went on to post more positive stuff themselves, while those who had positive posts hidden from their feeds used more negative words.

A public backlash followed; people were upset to discover that their emotions had been manipulated. Luca and Bazerman argue that this response was largely misguided. They point out that the effect was small. A person exposed to the negative news feed “ended up writing about four additional negative words out of every 10,000,” they note. Besides, they say, “advertisers and other groups manipulate consumers’ emotions all the time to suit their purposes. If you’ve ever read a Hallmark card, attended a football game or seen a commercial for the ASPCA, you’ve been exposed to the myriad ways in which products and services influence consumers’ emotions.”

(cut)

Medicine has already been through this. In the early twentieth century, without a set of ground rules on how people should be studied, medical experimentation was like the Wild West. Alongside a great deal of good work, a number of deeply unethical studies took place—including the horrifying experiments conducted by the Nazis and the appalling Tuskegee syphilis trial, in which hundreds of African-American men were denied treatment by scientists who wanted to see how the lethal disease developed. As a result, there are now clear rules about seeking informed consent whenever medical experiments use human subjects, and institutional procedures for reviewing the design of such experiments in advance. We’ve learned that researchers aren’t always best placed to assess the potential harm of their work.

The info is here.

Tuesday, March 17, 2020

Some Researchers Wear Yellow Pants, but Even Fewer Participants Read Consent Forms

B. Douglas, E. McGorray, & P. Ewell
PsyArXiv
Originally published 5 Feb 20

Abstract

Though consent forms include important information, those experienced with behavioral research often observe that participants do not carefully read consent forms. Three studies examined participants’ reading of consent forms for in-person experiments. In each study, we inserted the phrase “some researchers wear yellow pants” into sections of the consent form and measured participants’ reading of the form by testing their recall of the color yellow. In Study 1, we found that the majority of participants did not read consent forms thoroughly. This suggests that overall, participants sign consent forms that they have not read, confirming what has been observed anecdotally and documented in other research domains. Study 2 examined which sections of consent forms participants read and found that participants were more likely to read the first two sections of a consent form (procedure and risks) than later sections (benefits and anonymity and confidentiality). Given that rates of recall of the target phrase were under 70% even when the sentence was inserted into earlier sections of the form, we explored ways to improve participant reading in Study 3. Theorizing that the presence of a researcher may influence participants’ retention of the form, we assigned participants to read the form with or without a researcher present. Results indicated that removing the researcher from the room while participants read the consent form decreased recall of the target phrase. Implications of these results and suggestions for future researchers are discussed.

The research is here.

Monday, February 10, 2020

The medications that change who we are

Zaria Gorvett
BBC.com
Originally published 8 Jan 20

Here are two excerpts:

According to Golomb, this is typical – in her experience, most patients struggle to recognise their own behavioural changes, let alone connect them to their medication. In some instances, the realisation comes too late: the researcher was contacted by the families of a number of people, including an internationally renowned scientist and a former editor of a legal publication, who took their own lives.

We’re all familiar with the mind-bending properties of psychedelic drugs – but it turns out ordinary medications can be just as potent. From paracetamol (known as acetaminophen in the US) to antihistamines, statins, asthma medications and antidepressants, there’s emerging evidence that they can make us impulsive, angry, or restless, diminish our empathy for strangers, and even manipulate fundamental aspects of our personalities, such as how neurotic we are.

(cut)

Research into these effects couldn’t come at a better time. The world is in the midst of a crisis of over-medication, with the US alone buying up 49,000 tonnes of paracetamol every year – equivalent to about 298 paracetamol tablets per person – and the average American consuming $1,200 worth of prescription medications over the same period. And as the global population ages, our drug-lust is set to spiral even further out of control; in the UK, one in 10 people over the age of 65 already takes eight medications every week.

How are all these medications affecting our brains? And should there be warnings on packets?

The info is here.
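
The article's tablet figure is easy to verify. A quick back-of-the-envelope check, assuming a standard 500 mg tablet and a US population of roughly 330 million (both assumptions for illustration, not figures given in the article):

```python
# Back-of-the-envelope check of the BBC's paracetamol figure.
tonnes_per_year = 49_000
grams = tonnes_per_year * 1_000_000   # 1 tonne = 1,000,000 g
tablets = grams / 0.5                 # assume 500 mg (0.5 g) per tablet
per_person = tablets / 330_000_000    # assume ~330 million US residents
print(round(per_person))              # ~297, close to the quoted 298
```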

Tuesday, January 21, 2020

10 Years Ago, DNA Tests Were The Future Of Medicine. Now They’re A Social Network — And A Data Privacy Mess

Peter Aldhous
buzzfeednews.com
Originally posted 11 Dec 19

Here is an excerpt:

But DNA testing can reveal uncomfortable truths, too. Families have been torn apart by the discovery that the man they call “Dad” is not the biological father of his children. Home DNA tests can also be used to show that a relative is a rapist or a killer.

That possibility burst into the public consciousness in April 2018, with the arrest of Joseph James DeAngelo, alleged to be the Golden State Killer responsible for at least 13 killings and more than 50 rapes in the 1970s and 1980s. DeAngelo was finally tracked down after DNA left at the scene of a 1980 double murder was matched to people in GEDmatch who were the killer's third or fourth cousins. Through months of painstaking work, investigators working with the genealogist Barbara Rae-Venter built family trees that converged on DeAngelo.

Genealogists had long realized that databases like GEDmatch could be used in this way, but had been wary of working with law enforcement — fearing that DNA test customers would object to the idea of cops searching their DNA profiles and rummaging around in their family trees.

But the Golden State Killer’s crimes were so heinous that the anticipated backlash initially failed to materialize. Indeed, a May 2018 survey of more than 1,500 US adults found that 80% backed police using public genealogy databases to solve violent crimes.

“I was very surprised with the Golden State Killer case how positive the reaction was across the board,” CeCe Moore, a genealogist known for her appearances on TV, told BuzzFeed News a couple of months after DeAngelo’s arrest.

The info is here.

Monday, December 30, 2019

23 and Baby

Tanya Lewis
nature.com
Originally posted 4 Dec 19

Here are two excerpts:

Proponents say that genetic testing of newborns can help diagnose a life-threatening childhood-onset disease in urgent cases and could dramatically increase the number of genetic conditions all babies are screened for at birth, enabling earlier diagnosis and treatment. It could also inform parents of conditions they could pass on to future children or of their own risk of adult-onset diseases. Genetic testing could detect hundreds or even thousands of diseases, an order of magnitude more than current heel-stick blood tests—which all babies born in the U.S. undergo at birth—or confirm results from such a test.

But others caution that genetic tests may do more harm than good. They could miss some diseases that heel-stick testing can detect and produce false positives for others, causing anxiety and leading to unnecessary follow-up testing. Sequencing children’s DNA also raises issues of consent and the prospect of genetic discrimination.

Regardless of these concerns, newborn genetic testing is already here, and it is likely to become only more common. But is the technology sophisticated enough to be truly useful for most babies? And are families—and society—ready for that information?

(cut)

Then there’s the issue of privacy. If the child’s genetic information is stored on file, who has access to it? If the information becomes public, it could lead to discrimination by employers or insurance companies. The Genetic Information Nondiscrimination Act (GINA), passed in 2008, prohibits such discrimination. But GINA does not apply to employers with fewer than 15 employees and does not cover insurance for long-term care, life or disability. It also does not apply to people employed and insured by the military’s Tricare system, such as Rylan Gorby. When his son’s genome was sequenced, researchers also obtained permission to sequence Rylan’s genome, to determine if he was a carrier for the rare hemoglobin condition. Because it manifests itself only in childhood, Gorby decided taking the test was worth the risk of possible discrimination.

The info is here.

Friday, November 29, 2019

This Researcher Exploited Prisoners, Children, and the Elderly. Why Does Penn Honor Him?

Alexander Kafka
The Chronicle of Higher Education
Originally published Nov 8, 2019

Here is an excerpt:

What the university sites don’t mention is how Retin-A and Renova, an anti-wrinkle variation of the retinoic acid compound, were derived from substances first experimentally applied by Kligman’s research team to the skin of inmates at Holmesburg Prison, then a large facility in Philadelphia.

From the 1950s into the 1970s, the prison served as Kligman’s “Kmart of human experimentation,” in the words of Allen M. Hornblum, an author who exhaustively documented the Penn researcher’s projects at Holmesburg in his books Acres of Skin (1998) and Sentenced to Science: One Black Man’s Story of Imprisonment in America (2007).

Colleges are questioning the morality of accepting research funds from Jeffrey Epstein, who was accused of sexually molesting young girls, and the Sacklers, makers of OxyContin.

They are searching their souls over institutional ties to slavery and Jim Crow-era exploitation.

Hornblum and others have asked for decades whether Penn should be honoring Kligman, and Hornblum and Yusef Anthony, the former inmate whose story Hornblum tells in Sentenced to Science, will ask again in a lecture at Princeton next month. The current ethical climate amplifies their question.

The university’s president, Amy Gutmann, and a Penn colleague, the bioethicist Jonathan D. Moreno, recently published a book on bioethics and health care. “They are advising the world on all of these different issues,” Hornblum says, “but they don’t know what’s going on on their own campus? They don’t know it’s wrong?”

Penn says it “regrets the manner in which this research was conducted” and emphasizes the university’s commitment to research ethics. But it has given no indication that it plans to take any action regarding the lectureship or the university’s portrayal of Kligman.

Kligman, who died in 2010, defended his work by saying that experiments on prisoners were common at the time, and he was right. But, Hornblum says, the scale and duration of the Holmesburg experiments stood out even then.

The info is here.

Tuesday, October 22, 2019

AI used for first time in job interviews in UK to find best applicants

Charles Hymas
The Telegraph
Originally posted September 27, 2019

Artificial intelligence (AI) and facial expression technology is being used for the first time in job interviews in the UK to identify the best candidates.

Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates as they answer a set of identical job questions, filming themselves on a mobile phone or laptop.

The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of facial and linguistic information compiled from previous interviews of those who have gone on to prove to be good at the job.

HireVue, the US company that developed the interview technology, claims it enables hiring firms to interview more candidates in the initial stage rather than simply relying on CVs, and that it provides a more reliable and objective indicator of future performance, free of human bias.

However, academics and campaigners warned that any AI or facial recognition technology would inevitably have in-built biases in its databases that could discriminate against some candidates and exclude talented applicants who might not conform to the norm.

The info is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.