Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label institutional review boards.

Tuesday, April 29, 2014

This will hurt a bit

By David Hunter
BMJ Group Blogs
Originally posted April 11, 2014

Here is an excerpt:

In this piece she describes the case of a Cornell graduate student who carried out a piece of self-experimentation without IRB approval (based on the mistaken belief that it wasn't required). The experiment aimed to assess which part of the body is the worst place to be stung by a bee and involved "five stings a day, always between 9 and 10am, and always starting and ending with 'test stings' on his forearm to calibrate the ratings. He kept this up for 38 days, stinging himself three times each on 25 different body parts."

The entire blog post is here.

Monday, June 3, 2013

Experts propose overhaul of ethics oversight of research

The Hastings Center
Press Release
Originally released January 2013

Hastings Center Special Report aims to 'provoke a national conversation'

The longstanding ethical framework for protecting human volunteers in medical research needs to be replaced because it is outdated and can impede efforts to improve health care quality, assert leaders in bioethics, medicine, and health policy in two companion articles in a Hastings Center Report special report, "Ethical Oversight of Learning Health Care Systems." One of the authors calling for a new approach is the main architect of the current ethical framework.

Seven commentaries in the publication, written by leaders with national responsibility for ethical oversight of medical research and efforts to improve health care quality, find areas of agreement and offer critiques.

In an accompanying editorial, co-guest editors Mildred Z. Solomon, President of The Hastings Center, and Ann C. Bonham, Chief Scientific Officer of the Association of American Medical Colleges, wrote that by inviting these commentaries, they aimed to "provoke a national conversation." According to Solomon, "The challenge is to design oversight that adequately protects patients without impeding the kinds of data collection activities we need to improve health care quality, reduce disparities, and bring down our rate of medical errors." (See video of Dr. Solomon on the importance of this debate.)

For nearly four decades, protection of human participants in medical research has been based on the premise that there is a clear line between medical research and medical treatment. But, the two feature articles argue, that distinction has become blurred now that health care systems across the country are beginning to collect data from patients when they come in for treatment or follow-up. The Institute of Medicine has recommended that health care organizations do this kind of research, calling on them to become "learning health care systems."

In particular, the articles challenge the prevailing view that participating in medical research is inherently riskier and provides less benefit than receiving medical care. They point out that more than half of medical treatments lack evidence of effectiveness, putting patients at risk of harm. On the other hand, some kinds of clinical research are no riskier than clinical care and are potentially more beneficial; an example is comparative effectiveness research to find out which of two or more widely used interventions for a particular condition works best for which patients.

"Relying on this faulty research-practice distinction as the criterion that triggers ethical oversight has resulted in two major problems," the authors write. First, it has led to "delays, confusion, and frustrations in the regulatory environment" when institutional review boards, which are responsible for the ethical oversight of research with human subjects, have difficulty distinguishing between research and clinical practice. Second, it has "resulted in a morally questionable public policy in which many patients are either underprotected from clinical practice risks (when exposed to interventions of unproven effectiveness or to risks of medical error) or overprotected from learning activities that are of low risk . . . and that stand to contribute to improving health care safety, effectiveness, and value."

The authors call for a new ethical framework that "is commensurate with the risk and burden in both realms." Their second article outlines such a framework for determining the type and level of oversight needed for a learning health care system. The basic structure consists of seven obligations: 1) to respect the rights and dignity of patients; 2) to respect the clinical judgment of clinicians; 3) to provide optimal care to each patient; 4) to avoid imposing nonclinical risks and burdens on patients; 5) to reduce health inequalities among populations; 6) to conduct responsible activities that foster learning from clinical care and clinical information; and 7) to contribute to the common purpose of improving the quality and value of clinical care and the health system. The first six obligations would be the responsibility of researchers, clinicians, health care systems administrators, payers, and purchasers. The seventh obligation would be borne by patients.

Authors of the feature articles are Nancy E. Kass, deputy director for public health in the Johns Hopkins Berman Institute of Bioethics; Ruth R. Faden, director of the Johns Hopkins Berman Institute of Bioethics; Steven N. Goodman, associate dean for clinical and translational research at the Stanford University School of Medicine; Peter Pronovost, director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins; Sean Tunis, founder, president, and chief executive officer of the Center for Medical Technology Policy in Baltimore; and Tom L. Beauchamp, a professor of philosophy at Georgetown University and a senior research scholar at the Kennedy Institute of Ethics. Beauchamp was a chief architect of the Belmont Report, which established the existing research ethics framework in the United States.

The commentaries agree on the need to update ethics oversight for learning health care systems, but offer important critiques of the proposed framework. In particular, some commentators hold that the research-treatment distinction is still useful, and some are concerned that an obligation for patients to participate in quality improvement efforts would exempt too many studies from voluntary informed consent and IRB protections.

Wednesday, May 29, 2013

The SUPPORT Study and the Standard of Care

By Lois Shepherd
The Hastings Center Bioethics Forum
Originally posted May 17, 2013

The clinical research community and a number of prominent bioethicists have swiftly come to the defense of investigators conducting the SUPPORT study, in which approximately 1,300 premature infants were randomly assigned to be maintained at higher or lower levels of oxygen saturation. The study took place between 2005 and 2009, involved 22 sites, and was reviewed by at least as many institutional review boards. In March, the Office for Human Research Protections (OHRP) concluded that investigators had violated the informed consent provisions of the federal regulations governing research by failing to inform parents of infants enrolled in the study about risks of retinopathy, neurological injury, and death. Results from the study revealed that infants assigned to receive the lower range of oxygen suffered higher rates of death than infants assigned to the upper range, while the latter suffered higher rates of retinopathy than the former. Defenders accuse OHRP of faulting the investigators for failing to inform parents of risks they learned about only through the study.

The central point of disagreement between defenders and critics of the study appears to be whether participants in the study were receiving medical care that was different from the care they would have received outside the study and whether participation in research therefore carried any medical risks that required risk/benefit scrutiny by IRBs or disclosure to parents of the infants enrolled.  This would appear to be a factual matter about which one could obtain some clarity, but discussions of this issue have been somewhat opaque.

Part of the reason for this may be different, but unacknowledged, understandings of the concept of “standard of care,” a term used – both in the informed consent forms and the commentary about them – but rarely defined in this debate.

The entire article is here.

Friday, September 7, 2012

Do Post-Market Drug Trials Need a Higher Dose of Ethics?

Patients who sign up for trials testing more than one already approved intervention do not always know if one is being tested for harmful side effects

By Katherine Harmon
Scientific American
Originally published August 23, 2012

Here is an excerpt:

What you might not know—even after you sign up for the trial and have inked the informed-consent form—is that scattered reports are starting to suggest that the new medication might occasionally cause severe side effects. And the real reason the trial is being conducted with these previously released drugs is to test whether the new medication really is a lot riskier to everyone or just to a subset of patients.

If you found that out, would you still sign up for the trial? The problem is that many patients—and often even the institutional review boards that approve the trials—are never informed of these lingering questions.

This is one of the big ethical holes often left open in post-market trials, says Ruth Faden, director of the Johns Hopkins Berman Institute of Bioethics, who co-authored a new essay on this topic in The New England Journal of Medicine, which was published online August 22. She and a team of co-authors released a formal Institute of Medicine (IOM) report earlier this year recommending that the FDA improve this and other ethical aspects of post-market trials—especially those it requires.

Tuesday, July 10, 2012

Justice for Injured Research Subjects

By Carl Elliott, MD, PhD
The New England Journal of Medicine-Perspective
Originally published July 5, 2012

Critics have long argued that U.S. ethics guidelines protect researchers more than they protect research subjects. The U.S. system of oversight, writes Laura Stark, was developed as a “technique for promoting research and preventing lawsuits.” Consider, for example, the obligations of U.S. research sponsors when a study goes wrong. If a research subject is seriously injured, neither the researcher nor the sponsor has any legal obligation to pay for that subject's medical care. In fact, only 16% of academic medical centers in the United States make it a policy to pay for the care of injured subjects. If a subject is permanently disabled and unable to work, sponsors have no obligation to pay compensation for his or her lost income. If a subject dies, sponsors have no financial obligations to his or her family. Not a single academic medical center in the United States makes it a policy to compensate injured subjects or their families for lost wages or suffering. These policies do not change even if a subject is injured in a study that is scientifically worthless, deceptive, or exploitative.

Thanks to Gary Schoener for this information.

Thursday, March 1, 2012

Money, Coercion, and Undue Inducement: Attitudes about Payments to Research Participants

Emily A. Largent, Christine Grady, Franklin G. Miller, and Alan Wertheimer, "Money, Coercion, and Undue Inducement: Attitudes about Payments to Research Participants," IRB: Ethics & Human Research 34, no. 1 (2012): 1-8.

Using payment to recruit research subjects is a common practice, but it raises ethical concerns that coercion or undue inducement could potentially compromise participants’ informed consent. This is the first national study to explore the attitudes of IRB members and other human subjects protection professionals concerning whether payment of research participants constitutes coercion or undue influence, and if so, why. The majority of respondents expressed concern that payment of any amount might influence a participant’s decisions or behaviors regarding research participation. Respondents expressed greater acceptance of payment as reimbursement or compensation than as an incentive to participate in research, and most agreed that subjects are coerced if the offer of payment makes them participate when they otherwise would not or when the offer of payment causes them to feel that they have no reasonable alternative but to participate (82%). Views about undue influence were similar. We conclude that human subjects protection professionals hold expansive and inconsistent views about coercion and undue influence that may interfere with the recruitment of research participants and impede valuable research.