Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label human subjects protection.

Sunday, January 29, 2023

UCSF Issues Report, Apologizes for Unethical 1960s–70s Prison Research

Restorative Justice Calls for Continued Examination of the Past

Laura Kurtzman
Press Release
Originally posted December 20, 2022

Recognizing that justice, healing and transformation require an acknowledgment of past harms, UCSF has created the Program for Historical Reconciliation (PHR). The program is housed under the Office of the Executive Vice Chancellor and Provost and was started by the current Executive Vice Chancellor and Provost, Dan Lowenstein, MD.

The program’s first report, released this month, investigates experiments from the 1960s and 1970s involving incarcerated men at the California Medical Facility (CMF) in Vacaville. Many of these men were being assessed or treated for psychiatric diagnoses.

The research reviewed in the report was performed by Howard Maibach, MD, and William Epstein, MD, both faculty in UCSF’s Department of Dermatology. Epstein was a former chair of the department who died in 2006. The committee was asked to focus on the work of Maibach, who remains an active member of the department.

Some of the experiments exposed research subjects to pesticides and herbicides or administered medications with side effects. In all, some 2,600 incarcerated men were experimented on.

The men volunteered for the studies and were paid for participating. But the report raises ethical concerns over how the research was conducted. In many cases there was no record of informed consent. The subjects also did not have any of the medical conditions that the experiments could potentially have treated or ameliorated.

Such practices were common in the U.S. at the time and were increasingly being criticized both by experts and in the lay press. The research continued until 1977, when the state of California halted all human subject research in state prisons, a year after the federal government did the same.

The report acknowledges that Maibach was working during a time when the governance of human subjects research was evolving, both at UCSF and at institutions across the country. Over a six-month period, the committee gathered some 7,000 archival documents, medical journal articles, interviews, documentaries and books, much of which has yet to be analyzed. UCSF has acknowledged that it may issue a follow-up report.

The report found that “Maibach practiced questionable research methods. Archival records and published articles have failed to show any protocols that were adopted regarding informed consent and communicating research risks to participants who were incarcerated.”

In a review of publications between 1960 and 1980, the committee found virtually all of Maibach’s studies lacked documentation of informed consent despite a requirement for formal consent instituted in 1966 by the newly formed Committee on Human Welfare and Experimentation. Only one article, published in 1975, indicated the researchers had obtained informed consent as well as approval from UCSF’s Committee for Human Research (CHR), which began in 1974 as a result of new federal requirements.


Wednesday, November 2, 2022

How the Classics Changed Research Ethics

Scott Sleek
Psychological Science
Originally posted August 31, 2022

Here is an excerpt:

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inducing physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” she explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support? 

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009). 

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

Wednesday, September 6, 2017

The Nuremberg Code 70 Years Later

Jonathan D. Moreno, Ulf Schmidt, and Steve Joffe
JAMA. Published online August 17, 2017.

Seventy years ago, on August 20, 1947, the American military tribunal in Nuremberg, Germany, delivered its verdict in the trial of 23 doctors and bureaucrats accused of war crimes and crimes against humanity for their roles in cruel and often lethal concentration camp medical experiments. As part of its judgment, the court articulated a 10-point set of rules for the conduct of human experiments that has come to be known as the Nuremberg Code. Among other requirements, the code called for the “voluntary consent” of the human research subject, an assessment of risks and benefits, and assurances of competent investigators. These concepts have become an important reference point for the ethical conduct of medical research. Yet there has been considerable debate among scholars about the code’s authorship, scope, and legal standing in both civilian and military science. Nonetheless, the Nuremberg Code has undoubtedly been a milestone in the history of biomedical research ethics.1-3

Writings on medical ethics, laws, and regulations in a number of jurisdictions and countries, including a detailed and sophisticated set of guidelines from the Reich Ministry of the Interior in 1931, set the stage for the code. The same focus on voluntariness and risk that characterizes the code also suffuses these guidelines. What distinguishes the code is its context. As lead prosecutor Telford Taylor emphasized, although the Doctors’ Trial was at its heart a murder trial, it clearly implicated the ethical practices of medical experimenters and, by extension, the medical profession’s relationship to the state understood as an organized community living under a particular political structure. The embrace of Nazi ideology by German physicians, and the subsequent participation of some of their most distinguished leaders in the camp experiments, demonstrates the importance of professional independence from and resistance to the ideological and geopolitical ambitions of the authoritarian state.

The article is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists, who have long complained that their work should not require as much scrutiny as, say, a medical researcher’s.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Thursday, February 23, 2017

Equipoise in Research: Integrating Ethics and Science in Human Research

Alex John London
JAMA. 2017;317(5):525-526. doi:10.1001/jama.2017.0016

The principle of equipoise states that, when there is uncertainty or conflicting expert opinion about the relative merits of diagnostic, prevention, or treatment options, allocating interventions to individuals in a manner that allows the generation of new knowledge (eg, randomization) is ethically permissible. The principle of equipoise reconciles 2 potentially conflicting ethical imperatives: to ensure that research involving human participants generates scientifically sound and clinically relevant information while demonstrating proper respect and concern for the rights and interests of study participants.

The article is here.

Tuesday, January 12, 2016

Your Cells. Their Research. Your Permission?

By Rebecca Skloot
The New York Times
Originally posted December 30, 2015

Here are two excerpts:

What’s riding on this? Maybe the future of human health. We’re in the era of precision medicine, which relies on genetic and other personal information to develop individualized treatments. Those advances depend on scientists working with vast amounts of human tissue and DNA. Dr. Francis S. Collins, director of the National Institutes of Health, believes involving donors in this process gives scientists more useful information, and can be life-changing for donors. In announcing plans for the $215 million Precision Medicine Initiative, which he sees as a model for other future research, Dr. Collins said, “Participants will be partners in research, not subjects.” But people can be partners only if they know they’re participating.

(cut)

People have told me by the thousands, and numerous public opinion studies find the same: They want to know if their biospecimens are used in research, and they want to be asked first. Most will probably say yes, because they understand it’s important. They just don’t want to find out later. That damages their trust in science and doctors. It makes them wonder, what else are you hiding from me?

People tell me this because I wrote a book about Henrietta Lacks, a black tobacco farmer whose cancer cells, taken without her knowledge in 1951, are still alive in laboratories worldwide. Those cells, code-named HeLa, were the first immortal human cells grown in culture and led to some of the most important advances in medicine. But they came with troubling consequences: Her children were later used in research, their medical information was published, and the HeLa genome — including personal information about Mrs. Lacks and potentially her descendants — was sequenced and posted online. All without the family’s knowledge.

The article is here.

Friday, December 11, 2015

A Controversial Rewrite For Rules To Protect Humans In Experiments

By Rob Stein
NPR Morning Edition
Originally posted November 25, 2015

Throughout history, atrocities have been committed in the name of medical research.

Nazi doctors experimented on concentration camp prisoners. American doctors let poor black men with syphilis go untreated in the Tuskegee study. The list goes on.

To protect people participating in medical research, the federal government decades ago put in place strict rules on the conduct of human experiments.

Now the Department of Health and Human Services is proposing a major revision of these regulations, known collectively as the Common Rule. It's the first change proposed in nearly a quarter-century.

"We're in a very, very different world than when these regulations were first written," says Dr. Jerry Menikoff, who heads the HHS Office of Human Research Protections. "The goal is to modernize the rules to make sure terrible things don't happen."

The article and audio file are here.

Tuesday, September 1, 2015

The moral naivete of ethics by numbers

By Susan Dwyer
Aljazeera America
Originally posted August 13, 2015

What do bioethicists do? According to a recent Boston Globe op-ed by the Harvard cognitive psychologist Steven Pinker, they needlessly get in the way of saving and improving human lives by throwing up ethical red tape and slowing the speed of research, and in so doing, they undermine their right to call themselves ethicists at all.

In principle, it is correct that if 250,000 people die each year of a disease that is potentially treatable, the cost of every year’s delay in research is 250,000 lives. And it is certainly terrible to lose so many people to unnecessary delays. But Pinker doesn’t cite a single specific example in which bioethical scrutiny has produced such a result. Certainly, the withholding of experimental drugs has cost lives; for example, ZMapp, an experimental drug to treat Ebola, was not readily available to people in several African nations who were dying of the disease. Yet there was little of the drug on hand, in any case. But the problem here was not ethical red tape; it was the underfunding of research to treat “exotic” infectious disease.

The entire article is here.

Thursday, August 27, 2015

Steven Pinker is right about biotech and wrong about bioethics

Bill Gardner
The Incidental Economist
Originally published August 7, 2015

Here is an excerpt:

First, even by newspaper op-ed standards this is lazily argued. Pinker attributes a host of opinions to bioethicists without quoting any bioethicist. He does not cite any cases to document that bioethicists’ concerns about long term consequences have impeded research and caused harms. There likely are such cases, but he writes as if they are common. I served for years on the University of Pittsburgh IRB. For better or worse, the long term risks of biomedical research were never even discussed.

Worse, Pinker brackets “dignity” and “social justice” in sneer quotes, as if it were self-evident that affronts to these values do not fall into the class of “identifiable harms” and as if these concerns can be dismissed without any actual argument. The only normative framework that has weight, by his lights, is the mortality and morbidity of disease. Of course mortality and morbidity are exceptionally important. But if that is the only framework that matters to Pinker, he is in a very small minority.

The entire critique is here.

Thursday, July 2, 2015

CIA torture appears to have broken spy agency rule on human experimentation

By Spencer Ackerman
The Guardian
Originally posted June 15, 2015

The Central Intelligence Agency had explicit guidelines for “human experimentation” – before, during and after its post-9/11 torture of terrorism detainees – that raise new questions about the limits on the agency’s in-house and contracted medical research.

Sections of a previously classified CIA document, made public by the Guardian on Monday, empower the agency’s director to “approve, modify, or disapprove all proposals pertaining to human subject research”. The leeway provides the director, who has never in the agency’s history been a medical doctor, with significant influence over limitations the US government sets to preserve safe, humane and ethical procedures on people.

CIA director George Tenet approved abusive interrogation techniques, including waterboarding, designed by CIA contractor psychologists. He further instructed the agency’s health personnel to oversee the brutal interrogations – the beginning of years of controversy, still ongoing, about US torture as a violation of medical ethics.

The entire article is here.

Tuesday, May 12, 2015

A Drug Trial’s Frayed Promise

By Katie Thomas
The New York Times
Originally published April 17, 2015

Here is an excerpt:

The University of Minnesota’s clinical trial practices are now under intense scrutiny. In February, a panel of outside experts excoriated the university for failing to properly oversee clinical trials and for paying inadequate attention to the protection of vulnerable subjects. The review, commissioned by the university after years of criticism of its research practices, singled out Dr. Schulz and his department of psychiatry, describing “a culture of fear” that pervaded the department.

In March, after another critical report by Minnesota’s legislative auditor, the university announced that it would halt all drug trials being conducted by the psychiatry department until outside experts could review them. And this month, the university announced that Dr. Schulz would step down as head of the psychiatry department. The dean of the medical school, Dr. Brooks Jackson, said in a statement to reporters that Dr. Schulz’s decision “was completely his own” and that he would “remain a valued member of our faculty.”

The entire article is here.

Tuesday, May 5, 2015

Markingson case: University of Minnesota can't regain trust under current leadership

By Arne H. Carlson
The Star Tribune
Originally published April 10, 2015

Here is an excerpt:

Ever since the violent suicide of Dan Markingson in 2004, the administration of the University of Minnesota has received repeated calls for the release of more details about the care and protection afforded the victim. These calls have come from faculty members at the university, from local community members and from researchers from around the world. But instead of being transparent and forthright, the administration created a standard response similar to that expressed by the university’s former general counsel, Mark Rotenberg: “As we’ve stated previously, the Markingson case has been exhaustively reviewed by Federal, State and academic bodies since 2004. The FDA, the Hennepin County District Court, the Minnesota Board of Medical Practice, the Minnesota Attorney General’s office and the University’s Institutional Review Board have all reviewed the case. None found fault with any of our faculty.”

The entire article is here.

Tuesday, March 24, 2015

UMN research review finds inadequate protections

By Josh Verges
twincities.com
Originally posted February 27, 2015

A decade after a psychiatric patient's suicide, the University of Minnesota still fails on several fronts to protect vulnerable human research subjects.

That's the finding of an external review ordered by President Eric Kaler last year and made public Friday. It raises serious questions about the authorization of and oversight for U research, especially in the Department of Psychiatry.

Questions about recruitment, consent and treatment have persisted since a 2008 Pioneer Press series concerning the 2004 death of Dan Markingson, an antipsychotic drug research subject.

The entire article is here.

Thursday, November 14, 2013

How medical researchers become morally entangled

By Henry S. Richardson
Oxford University Press Blog
Originally published October 27, 2013

A huge amount of ethical angst swirls around the topic of informed consent. Can lay people who are considering signing up as subjects in a medical study really be made to understand the risks they are facing? Can information about these risks be communicated across cultural and educational gulfs? What degree of informed understanding should we expect subjects to have, anyway? 

Underlying the process of informed consent, though, is a simpler and more fundamental issue. The one-sided focus of the medical-research ethics establishment on preventing harms and abuses has obscured this core function from view. We need to remember why consent is needed for participation in medical research in the first place. It is needed because the researchers need the subjects’ permission to do things that otherwise would be wrong to do. It is wrong to examine and touch people’s naked bodies in the ways researchers need to do, to collect samples of their blood, urine, and feces, and to collect detailed information on their medical histories without getting their permission.

The entire story is here.

Monday, November 4, 2013

World Medical Association Declaration of Helsinki

Ethical Principles for Medical Research Involving Human Subjects
World Medical Association
JAMA. Published online October 19, 2013. doi:10.1001/jama.2013.281053

Preamble

1. The World Medical Association (WMA) has developed the Declaration of Helsinki as a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data.

The Declaration is intended to be read as a whole and each of its constituent paragraphs should be applied with consideration of all other relevant paragraphs.

2. Consistent with the mandate of the WMA, the Declaration is addressed primarily to physicians. The WMA encourages others who are involved in medical research involving human subjects to adopt these principles.

The entire document is here.

Wednesday, June 26, 2013

Committee supervises ethics of human testing

By Madison Pauly
The Dartmouth
Published on Monday, February 25, 2013

From new cardiology studies to students who go overseas and want to interview people, the Committee for the Protection of Human Subjects answers ethical questions about human research at Dartmouth. The committee is an interdisciplinary group of experts and community members who analyze the risk posed to participants by Dartmouth-affiliated researchers’ studies.

As Dartmouth’s incarnation of a federally mandated institutional review board, the committee analyzes proposals for research on human subjects from Dartmouth-Hitchcock Medical Center and the Veterans Affairs Medical Center in White River Junction, as well as the College’s graduate and undergraduate departments.

While all studies involving human participants are subject to review by the committee, those that receive funding from sources other than the government must pay a review fee to the committee office.

A division of the Provost’s Office, the committee is financed in part by federal funds allocated to Dartmouth for research. Accordingly, its review process follows federal policies to ensure “respect for persons, beneficence and justice,” according to a Department of Health and Human Services report.

Major areas of ethical concern include a study’s medical relevance, involvement of vulnerable populations, informed consent process, and use of deception, said assistant provost for research Liz Bankert, a member of the committee.

The current federal regulations were last revised in 1991 and often fail to give adequate ethical guidance on modern research questions, said Bankert, who also serves on a national research ethics advisory committee.

The entire story is here.

Monday, June 10, 2013

Human Subjects Research Landscape Project – Analysis Dataset

The Presidential Commission for the Study of Bioethics

In order to respond to President Obama’s November 24, 2010 charge “to determine if Federal regulations and international standards adequately guard the health and well-being of participants in scientific studies supported by the Federal Government,” the Commission recognized that a critical first step would be to define and understand the landscape of “scientific studies supported by the Federal Government.” Finding no comprehensive publicly available source for this information, the Commission asked the 18 federal departments and agencies that have adopted the Common Rule—and therefore were likely to support scientific studies with human subjects—to provide basic project-level data for department/agency-supported human subjects research in Fiscal Year 2006 to Fiscal Year 2010.

These data, which include study title, number and location of sites, number of subjects, and funding information, were compiled into the Commission’s Research Project Database, and analyzed as part of its Human Subjects Research Landscape Project.

Posted here is the Commission’s analysis dataset, which incorporates minimal data cleaning as detailed in “Appendix II: Human Subjects Research Landscape Project Methods.”  Also posted is a data dictionary that defines the dataset’s fields. The data are available in two formats: Microsoft Access and .CSV.  The Access file contains the same information as the three .CSV files.

As detailed in the Methods, department/agency-reported information in the dataset was not independently audited or verified.  Moreover, the dataset is static; no additional data will be added to it.

For further information, and to read the Human Subjects Research Landscape Project Methods, please see “Appendix I: Human Subjects Research Landscape Project: Scope and Volume of Federally Supported Human Subjects Research” and “Appendix II: Human Subjects Research Landscape Project Methods.”
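For readers who want to explore the release, here is a minimal sketch, in Python with pandas, of loading the three .CSV files and profiling each table so the results can be checked against the Commission’s data dictionary. The file names below are hypothetical placeholders, not part of the Commission’s documentation; substitute the names actually distributed with the dataset.

import pandas as pd

# Hypothetical file names; use the three .CSV files distributed
# with the Commission's analysis dataset.
files = ["projects.csv", "sites.csv", "funding.csv"]

# Load each file into a DataFrame, keyed by file name.
tables = {name: pd.read_csv(name) for name in files}

# Print each table's row count and column names, to be verified
# against the data dictionary that defines the dataset's fields.
for name, df in tables.items():
    print(f"{name}: {len(df)} rows")
    print("  columns:", ", ".join(df.columns))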

The entire study is here.

Monday, June 3, 2013

Experts propose overhaul of ethics oversight of research

The Hastings Center
Press Release
Originally released January 2013

Hastings Center Special Report aims to 'provoke a national conversation'

The longstanding ethical framework for protecting human volunteers in medical research needs to be replaced because it is outdated and can impede efforts to improve health care quality, assert leaders in bioethics, medicine, and health policy in two companion articles in a Hastings Center Report special report, "Ethical Oversight of Learning Health Care Systems." One of the authors calling for a new approach is the main architect of the current ethical framework.

Seven commentaries in the publication, written by leaders with national responsibility for ethical oversight of medical research and efforts to improve health care quality, find areas of agreement and offer critiques.

In an accompanying editorial, co-guest editors Mildred Z. Solomon, President of The Hastings Center, and Ann C. Bonham, Chief Scientific Officer at the Association of American Medical Colleges, wrote that by inviting these commentaries, they aimed to "provoke a national conversation." According to Solomon, "The challenge is to design oversight that adequately protects patients without impeding the kinds of data collection activities we need to improve health care quality, reduce disparities, and bring down our rate of medical errors." (See video of Dr. Solomon on the importance of this debate.)

For nearly four decades, protection of human participants in medical research has been based on the premise that there is a clear line between medical research and medical treatment. But, the two feature articles argue, that distinction has become blurred now that health care systems across the country are beginning to collect data from patients when they come in for treatment or follow-up. The Institute of Medicine has recommended that health care organizations do this kind of research, calling on them to become "learning health care systems."

In particular, the articles challenge the prevailing view that participating in medical research is inherently riskier and provides less benefit than receiving medical care. They point out that more than half of medical treatments lack evidence of effectiveness, putting patients at risk of harm. On the other hand, some kinds of clinical research are no riskier than clinical care and are potentially more beneficial; an example is comparative effectiveness research to find out which of two or more widely used interventions for a particular condition works best for which patients.

"Relying on this faulty research-practice distinction as the criterion that triggers ethical oversight has resulted in two major problems," the authors write. First, it has led to "delays, confusion, and frustrations in the regulatory environment" when institutional review boards, which are responsible for the ethical oversight of research with human subjects, have difficulty distinguishing between research and clinical practice. Second, it has "resulted in a morally questionable public policy in which many patients are either underprotected from clinical practice risks (when exposed to interventions of unproven effectiveness or to risks of medical error) or overprotected from learning activities that are of low risk . . . and that stand to contribute to improving health care safety, effectiveness, and value."

The authors call for a new ethical framework that "is commensurate with the risk and burden in both realms." Their second article outlines such a framework for determining the type and level of oversight needed for a learning health care system. The basic structure consists of seven obligations: 1) to respect the rights and dignity of patients; 2) to respect the clinical judgment of clinicians; 3) to provide optimal care to each patient; 4) to avoid imposing nonclinical risks and burdens on patients; 5) to reduce health inequalities among populations; 6) to conduct responsible activities that foster learning from clinical care and clinical information; and 7) to contribute to the common purpose of improving the quality and value of clinical care and the health system. The first six obligations would be the responsibility of researchers, clinicians, health care systems administrators, payers, and purchasers. The seventh obligation would be borne by patients.

Authors of the feature articles are Nancy E. Kass, deputy director for public health in the Johns Hopkins Berman Institute of Bioethics; Ruth R. Faden, director of the Johns Hopkins Berman Institute of Bioethics; Steven N. Goodman, associate dean for clinical and translational research at the Stanford University School of Medicine; Peter Pronovost, director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins; Sean Tunis, founder, president, and chief executive officer of the Center for Medical Technology Policy in Baltimore; and Tom L. Beauchamp, a professor of philosophy at Georgetown University and a senior research scholar at the Kennedy Institute of Ethics. Beauchamp was a chief architect of the Belmont Report, which established the existing research ethics framework in the United States.

The commentaries on the articles find common cause with the need to update clinical oversight for learning health care systems, but offer important critiques of the proposed framework. In particular, some hold that the research-treatment distinction is still useful and are concerned that the obligation for patients to participate in quality improvement efforts would exempt too many studies from voluntary informed consent and IRB protections.

Friday, May 17, 2013

Reconsidering the Declaration of Helsinki

By Ezekiel J. Emanuel
The Lancet, Volume 381, Issue 9877, Pages 1532–1533, 4 May 2013
doi:10.1016/S0140-6736(13)60970-8

Next year will mark the 50th anniversary of the Declaration of Helsinki. Consequently, the World Medical Association (WMA) is developing its eighth version of the Declaration. This anniversary presents an excellent opportunity to reconsider the problems of the Declaration and how they can be remedied to ensure the document retains its prominent status.

In 1964 when the Declaration of Helsinki was initially enacted, it contained 11 articles and 713 words. At that time, the Declaration was unique. Over the years, ethical guidance on research involving human participants has proliferated substantially to encompass the Belmont Report by the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research; the International Ethical Guidelines for Biomedical Research Involving Human Subjects of the Council for International Organizations of Medical Sciences; multiple laws and regulations, such as the US Federal Policy for the Protection of Human Subjects (known as the “Common Rule”, 45 CFR part 46) and the European Union's Clinical Trials Directive; and the eight principles of What Makes Research Ethical?. Simultaneously, the Declaration of Helsinki has been revised six times and tripled in size with its 35 articles and 2045 words. The revisions have often been extensive. For instance, the distinction between “clinical research combined with professional care” and “non-therapeutic clinical research” was eliminated after much withering criticism. The article that relates to use of placebos was revised and scaled back multiple times between 2000 and 2008.

Over the years problems with, and objections to, the document have accumulated. I propose that there are nine distinct problems with the current version of the Declaration of Helsinki: it has an incoherent structure; it confuses medical care and research; it addresses the wrong audience; it makes extraneous ethical provisions; it includes contradictions; it contains unnecessary repetitions; it uses multiple and poor phrasings; it includes excessive details; and it makes unjustified, unethical recommendations. For instance, the Declaration reads like a haphazard list of articles without an overall logical framework. The topics of articles 21 to 24 are literally a jumble: they cover the importance of the research outweighing research risks, the requirement for voluntary consent, the need to protect participants' privacy, and informed consent requirements for competent individuals, respectively.

The entire article is here.

Tuesday, July 10, 2012

Justice for Injured Research Subjects

By Carl Elliott, MD, PhD
The New England Journal of Medicine-Perspective
Originally published July 5, 2012

Critics have long argued that U.S. ethics guidelines protect researchers more than they protect research subjects. The U.S. system of oversight, writes Laura Stark, was developed as a “technique for promoting research and preventing lawsuits.” Consider, for example, the obligations of U.S. research sponsors when a study goes wrong. If a research subject is seriously injured, neither the researcher nor the sponsor has any legal obligation to pay for that subject's medical care. In fact, only 16% of academic medical centers in the United States make it a policy to pay for the care of injured subjects. If a subject is permanently disabled and unable to work, sponsors have no obligation to pay compensation for his or her lost income. If a subject dies, sponsors have no financial obligations to his or her family. Not a single academic medical center in the United States makes it a policy to compensate injured subjects or their families for lost wages or suffering. These policies do not change even if a subject is injured in a study that is scientifically worthless, deceptive, or exploitative.


Thanks to Gary Schoener for this information.