Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Human Research. Show all posts

Friday, September 1, 2017

A Plutocratic Proposal: an ethical way for rich patients to pay for a place on a clinical trial

Alexander Masters and Dominic Nutt
Journal of Medical Ethics 
Published Online First: 06 June 2017.

Abstract

Many potential therapeutic agents are discarded before they are tested in humans. These are not quack medications. They are drugs and other interventions that have been developed by responsible scientists in respectable companies or universities and are often backed up by publications in peer-reviewed journals. These possible treatments might ease suffering and prolong the lives of innumerable patients, yet they have been put aside. In this paper, we outline a novel mechanism—the Plutocratic Proposal—to revive such neglected research and fund early phase clinical trials. The central idea of the Proposal is that any patient who rescues a potential therapeutic agent from neglect by funding early phase clinical trials (either entirely or in large part) should be offered a place on the trial.

The article is here.

Sunday, October 2, 2016

Why you should worry about the privatization of genetic data

Kayte Spector-Bagdady
The Conversation
Originally posted September 8, 2016

Here is an excerpt:

But genetic data banks amassed by private companies don’t necessarily have to follow the same regulations regarding access to their data that federally funded researchers do. And a recent proposal to change consent regulations for human research may make it cheaper for private companies to collect and use this data than public ones.

As bioethicists (myself included) have warned, we need to pay attention to concerns about how these private genetic data banks are used and accessed before we enable a system where the future of public genetic research lies in private hands.

The article is here.

Wednesday, March 26, 2014

The Fat Drug

By Pagan Kennedy
The New York Times
Originally published March 8, 2014

Here is an excerpt:

Nonetheless, experiments were then being conducted on humans. In the 1950s, a team of scientists fed a steady diet of antibiotics to schoolchildren in Guatemala for more than a year, while Charles H. Carter, a doctor in Florida, tried a similar regimen on mentally disabled kids. Could the children, like the farm animals, grow larger? Yes, they could.

Mr. Jukes summarized Dr. Carter’s research in a monograph on nutrition and antibiotics: “Carter carried out a prolonged investigation of a study of the effects of administering 75 mg of chlortetracycline” — the chemical name for Aureomycin — “twice daily to mentally defective children for periods of up to three years at the Florida Farm Colony. The children were mentally deficient spastic cases and were almost entirely helpless,” he wrote. “The average yearly gain in weight for the supplemented group was 6.5 lb while the control group averaged 1.9 lb in yearly weight gain.”

The entire article is here.

Saturday, June 22, 2013

The Value of Role Reversal

Guest Post by Rebecca Dresser, Washington University
BMJ Group Blogs
Originally posted on June 20, 2013

"Not so long ago, medical researchers had a habit of using themselves as guinea pigs.  Many scientists saw self-experimentation as the most ethical way to try out their ideas.  By going first, researchers could test their hypotheses and see how novel interventions affected human beings.

Today we rely on a more systematic process to decide when to begin human testing, with experts and ethicists evaluating when a trial is justified.  But a modified version of self-experimentation still makes sense.

People who conduct human research, as well as those serving on research ethics boards, can learn a lot from volunteering for studies.  Just as doctors learn from personal experience as patients, scientists and ethicists learn from personal experience as subjects.

Looking at study requirements and the consent process from the subject’s point of view can be quite educational.  I discovered this myself when I was given the option of enrolling in a cancer treatment trial.  I had never before realized that enrolling in a trial can delay the start of treatment, because of the extra appointments and procedures research enrollment can require.  Nor had I realized that because cancer trials take years to finish, subjects in those trials may lose an opportunity to receive new drugs that emerge during that time.  I’ve spent three decades writing about research ethics and serving on research review boards, but I learned new things once I had to decide whether to become a subject myself.

No one should be forced to participate in research, of course.  But I encourage research professionals to consider becoming subjects themselves (not necessarily in their own trials, but in studies conducted by others).  This modern version of self-experimentation might give researchers and ethicists a better sense of what people need to know before enrolling in a study.  It might also give scientists and review committees a deeper understanding of the risks, inconveniences, and benefits that subjects experience in research."

Rebecca’s paper “Personal Knowledge and Study Participation” is now available online first here.

Monday, June 3, 2013

Experts propose overhaul of ethics oversight of research

The Hastings Center
Press Release
Originally released January 2013

Hastings Center Special Report aims to 'provoke a national conversation'

The longstanding ethical framework for protecting human volunteers in medical research needs to be replaced because it is outdated and can impede efforts to improve health care quality, assert leaders in bioethics, medicine, and health policy in two companion articles in a Hastings Center Report special report, "Ethical Oversight of Learning Health Care Systems." One of the authors calling for a new approach is the main architect of the current ethical framework.

Seven commentaries in the publication, written by leaders with national responsibility for ethical oversight of medical research and efforts to improve health care quality, find areas of agreement and offer critiques.

In an accompanying editorial, co-guest editors Mildred Z. Solomon, President of The Hastings Center, and Ann C. Bonham, Chief Scientific Officer at the Association of American Medical Colleges, wrote that by inviting these commentaries, they aimed to "provoke a national conversation." According to Solomon, "The challenge is to design oversight that adequately protects patients without impeding the kinds of data collection activities we need to improve health care quality, reduce disparities, and bring down our rate of medical errors."

For nearly four decades, protection of human participants in medical research has been based on the premise that there is a clear line between medical research and medical treatment. But, the two feature articles argue, that distinction has become blurred now that health care systems across the country are beginning to collect data from patients when they come in for treatment or follow-up. The Institute of Medicine has recommended that health care organizations do this kind of research, calling on them to become "learning health care systems."

In particular, the articles challenge the prevailing view that participating in medical research is inherently riskier and provides less benefit than receiving medical care. They point out that more than half of medical treatments lack evidence of effectiveness, putting patients at risk of harm. On the other hand, some kinds of clinical research are no riskier than clinical care and are potentially more beneficial; an example is comparative effectiveness research to find out which of two or more widely used interventions for a particular condition works best for which patients.

"Relying on this faulty research-practice distinction as the criterion that triggers ethical oversight has resulted in two major problems," the authors write. First, it has led to "delays, confusion, and frustrations in the regulatory environment" when institutional review boards, which are responsible for the ethical oversight of research with human subjects, have difficulty distinguishing between research and clinical practice. Second, it has "resulted in a morally questionable public policy in which many patients are either underprotected from clinical practice risks (when exposed to interventions of unproven effectiveness or to risks of medical error) or overprotected from learning activities that are of low risk . . . and that stand to contribute to improving health care safety, effectiveness, and value."

The authors call for a new ethical framework that "is commensurate with the risk and burden in both realms." Their second article outlines such a framework for determining the type and level of oversight needed for a learning health care system. The basic structure consists of seven obligations: 1) to respect the rights and dignity of patients; 2) to respect the clinical judgment of clinicians; 3) to provide optimal care to each patient; 4) to avoid imposing nonclinical risks and burdens on patients; 5) to reduce health inequalities among populations; 6) to conduct responsible activities that foster learning from clinical care and clinical information; and 7) to contribute to the common purpose of improving the quality and value of clinical care and the health system. The first six obligations would be the responsibility of researchers, clinicians, health care systems administrators, payers, and purchasers. The seventh obligation would be borne by patients.

Authors of the feature articles are Nancy E. Kass, deputy director for public health in the Johns Hopkins Berman Institute of Bioethics; Ruth R. Faden, director of the Johns Hopkins Berman Institute of Bioethics; Steven N. Goodman, associate dean for clinical and translational research at the Stanford University School of Medicine; Peter Pronovost, director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins; Sean Tunis, founder, president, and chief executive officer of the Center for Medical Technology Policy in Baltimore; and Tom L. Beauchamp, a professor of philosophy at Georgetown University and a senior research scholar at the Kennedy Institute of Ethics. Beauchamp was a chief architect of the Belmont Report, which established the existing research ethics framework in the United States.

The commentaries on the articles agree on the need to update ethical oversight for learning health care systems, but offer important critiques of the proposed framework. In particular, some hold that the research-treatment distinction is still useful, and some are concerned that an obligation for patients to participate in quality improvement efforts would exempt too many studies from voluntary informed consent and IRB protections.

Monday, March 18, 2013

Overhaul of Rules for Human Research Hits Impasse

By Paul Basken
The Chronicle of Higher Education
Originally published March 7, 2013

After months of trying to reconcile the sometimes competing goals of making the rules both simpler and tougher, while engaging 17 different federal agencies affected by the Common Rule, participants are describing the process as stalemated.

"I think it's dead, pretty much," said E. Greg Koski, a former director of the human-research-protections office, reflecting assessments he's heard from key players in the process.

The office has a published timetable suggesting it will formally propose a new set of regulations next month. In a written statement, the current director of the Office for Human Research Protections, Jerry A. Menikoff, said he intended to keep trying.

"This is, of course, a complicated undertaking, as was stated from the outset, and it takes time," Dr. Menikoff said.

The entire story, behind The Chronicle's paywall, is here.