Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Institutional Review Board.

Wednesday, November 2, 2022

How the Classics Changed Research Ethics

Scott Sleek
Psychological Science
Originally posted August 31, 2022

Here is an excerpt:

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inflicting physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” Fisher explained.

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support? 

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009). 

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

Monday, January 28, 2019

Second woman carrying gene-edited baby, Chinese authorities confirm

Agence France-Presse
[Photo caption: Zhou Xiaoqin, left, loads Cas9 protein and PCSK9 sgRNA molecules into a fine glass pipette as Qin Jinzhou watches at a laboratory in Shenzhen in southern China.]
Originally posted January 21, 2019


A second woman became pregnant during the experiment to create the world’s first genetically edited babies, Chinese authorities have confirmed, as the researcher behind the claim faces a police investigation.

He Jiankui shocked the scientific community last year after announcing he had successfully altered the genes of twin girls born in November to prevent them contracting HIV.

He had told a human genome forum in Hong Kong there had been “another potential pregnancy” involving a second couple.

A provincial government investigation has since confirmed the existence of the second mother and that the woman was still pregnant, the official Xinhua news agency reported.

The expectant mother and the twin girls from the first pregnancy will be put under medical observation, an investigator told Xinhua.

The info is here.

Tuesday, January 22, 2019

Proceedings Start Against ‘Sokal Squared’ Hoax Professor

Katherine Mangan
The Chronicle of Higher Education
Originally posted January 7, 2019

Here is an excerpt:

The Oregon university’s institutional review board concluded that Boghossian’s participation in the elaborate hoax had violated Portland State’s ethical guidelines, according to documents Boghossian posted online. The university is considering a further charge that he had falsified data, the documents indicate.

Last month Portland State’s vice president for research and graduate studies, Mark R. McLellan, ordered Boghossian to undergo training on human-subjects research as a condition for getting further studies approved. In addition, McLellan said he had referred the matter to the president and provost because Boghossian’s behavior "raises ethical issues of concern."

Boghossian and his supporters have gone on the offensive with an online press kit that links to emails from Portland State administrators. It also includes a video filmed by a documentary filmmaker that shows Boghossian reading an email that asks him to appear before the institutional review board in October. In the video, Boghossian discusses the implications of potentially being found responsible for professional misconduct. He’s speaking with his co-authors, Helen Pluckrose, a self-described "exile from the humanities" who studies medieval religious writings about women, and James A. Lindsay, an author and mathematician.

The info is here.

Tuesday, August 21, 2018

Ethical Concerns Raised by Illicit Human Experiments

David Tereshchuk
Religion and Ethics - PBS.org
Originally posted July 16, 2018

Institutional regulation in science – including medical science – is undergoing one of its periodic assaults by proponents of greater freedom in research. These proponents argue (most of them in entirely good faith, I should stress) that experimentation is often needlessly hampered by too much official control. Formal constraints, they say, can cramp the kind of spontaneous improvisation that leads to unexpected, sometimes spectacular, breakthroughs.

As reported by Marisa Taylor of Kaiser Health News, the federal Food and Drug Administration (which won’t officially confirm this) is pursuing criminal inquiries into an egregious case of medical experimentation – conducted illicitly in off-shore locations and in hotel rooms on American soil.

The procedures under investigation were self-styled drug ‘trials’ – apparently a last-ditch effort by a university professor of microbiology, William Halford, who – knowing he was dying from an incurable cancer – evidently threw both professional caution and ethics to the winds. He embarked, hell-bent, on a test program for a herpes vaccine he’d invented but for which he hadn’t gained FDA approval – a program that involved injecting it into human subjects.

The information is here.

Monday, July 2, 2018

What Does an Infamous Biohacker’s Death Mean for the Future of DIY Science?

Kristen Brown
The Atlantic
Originally posted May 5, 2018

Here are two excerpts:

At just 28, Traywick was among the most infamous figures in the world of biohacking—the grandiose CEO of a tiny company called Ascendance Biomedical, whose goal was to develop and test new gene therapies without the expense and rigor of clinical trials or the oversight of the FDA. Traywick wanted to cure cancer, herpes, HIV, and even aging, and he wanted to do it without having to deal with the rules and safety precautions of regulators and industry standards.

“There are breakthroughs in the world that we can actually bring to market in a way that wouldn’t require us to butt up against the FDA’s walls, but instead walk around them,” Traywick told me the first time I met him in person, during a biotech conference in San Francisco last January.

To “walk around” regulators, Ascendance and other biohackers typically rely on testing products on themselves. Self-experimentation, although strongly discouraged by agencies like the FDA, makes it difficult for regulators to intervene. The rules that govern drug development simply aren’t written to oversee what an individual might do to themselves.

(cut)

The biggest shame, said Zayner, is that we’ll never get the chance to see how Traywick might have matured once he’d been in the biohacking sphere a little longer.

Whatever their opinion of Traywick, everyone who knew him agreed that he was motivated by an extreme desire to make drugs more widely available for those who need them.

The information is here.

Saturday, May 12, 2018

Bystander risk, social value, and ethics of human research

S. K. Shah, J. Kimmelman, A. D. Lyerly, H. F. Lynch, and others
Science, 13 Apr 2018: 158-159

Two critical, recurring questions can arise in many areas of research with human subjects but are poorly addressed in much existing research regulation and ethics oversight: How should research risks to “bystanders” be addressed? And how should research be evaluated when risks are substantial but not offset by direct benefit to participants, and the benefit to society (“social value”) is context-dependent? We encountered these issues while serving on a multidisciplinary, independent expert panel charged with addressing whether human challenge trials (HCTs) in which healthy volunteers would be deliberately infected with Zika virus could be ethically justified (1). Based on our experience on that panel, which concluded that there was insufficient value to justify a Zika HCT at the time of our report, we propose a new review mechanism to preemptively address issues of bystander risk and contingent social value.

(cut)

Some may object that generalizing and institutionalizing this approach could slow valuable research by adding an additional layer for review. However, embedding this process within funding agencies could preempt ethical problems that might otherwise stymie research. Concerns that CERCs might suffer from “mission creep” could be countered by establishing clear charters and triggers for deploying CERCs. Unlike IRBs, their opinions should be publicly available to provide precedent for future research programs or for IRBs evaluating particular protocols at a later date.

The information is here.

Wednesday, May 2, 2018

Institutional Research Misconduct Reports Need More Credibility

Gunsalus CK, Marcus AR, Oransky I.
JAMA. 2018;319(13):1315–1316.
doi:10.1001/jama.2018.0358

Institutions have a central role in protecting the integrity of research. They employ researchers, own the facilities where the work is conducted, receive grant funding, and teach many students about the research process. When questions arise about research misconduct associated with published articles, scientists and journal editors usually first ask the researchers’ institution to investigate the allegations and then report the outcomes, under defined circumstances, to federal oversight agencies and other entities, including journals.

Depending on institutions to investigate their own faculty presents significant challenges. Misconduct reports, the mandated product of institutional investigations for which US federal dollars have been spent, have a wide range of problems. These include lack of standardization, inherent conflicts of interest that must be addressed to directly ensure credibility, little quality control or peer review, and limited oversight. Even when institutions act, the information they release to the public is often limited and unhelpful.

As a result, like most elements of research misconduct, little is known about institutions’ responses to potential misconduct by their own members. The community that relies on the integrity of university research does not have access to information about how often such claims arise, or how they are resolved. Nonetheless, there are some indications that many internal reviews are deficient.

The article is here.

Thursday, April 26, 2018

Practical Tips for Ethical Data Sharing

Michelle N. Meyer
Advances in Methods and Practices in Psychological Science
Volume 1, Issue 1, pp. 131-144

Abstract

This Tutorial provides practical dos and don’ts for sharing research data in ways that are effective, ethical, and compliant with the federal Common Rule. I first consider best practices for prospectively incorporating data-sharing plans into research, discussing what to say—and what not to say—in consent forms and institutional review board applications, tools for data de-identification and how to think about the risks of re-identification, and what to consider when selecting a data repository. Turning to data that have already been collected, I discuss the ethical and regulatory issues raised by sharing data when the consent form either was silent about data sharing or explicitly promised participants that the data would not be shared. Finally, I discuss ethical issues in sharing “public” data.

The article is here.
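Meyer’s tutorial treats de-identification and re-identification risk as concrete, checkable properties of a dataset rather than abstractions. As a loose illustration of that mindset (a minimal Python sketch, not anything taken from the paper itself), the snippet below drops direct identifiers from a toy table and then measures the k-anonymity of the remaining quasi-identifiers; every column name, record, and threshold here is invented for the example.

from collections import Counter

# Toy records: one direct identifier plus quasi-identifiers (all made up).
records = [
    {"name": "A. Smith", "zip": "17821", "birth_year": 1984, "sex": "F", "score": 12},
    {"name": "B. Jones", "zip": "17821", "birth_year": 1984, "sex": "F", "score": 9},
    {"name": "C. Lee",   "zip": "17822", "birth_year": 1990, "sex": "M", "score": 15},
]

DIRECT_IDENTIFIERS = {"name"}                      # drop outright before sharing
QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")   # risky in combination

def deidentify(rows):
    """Remove direct identifiers; quasi-identifiers stay for analysis."""
    return [{k: v for k, v in r.items() if k not in DIRECT_IDENTIFIERS} for r in rows]

def k_anonymity(rows):
    """Smallest group size across all quasi-identifier combinations."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return min(groups.values())

shared = deidentify(records)
print(f"k-anonymity of shared table = {k_anonymity(shared)}")
# Prints 1: C. Lee's quasi-identifier combination is unique, so that row
# could plausibly be re-identified by anyone who knows those attributes.

A repository might, hypothetically, insist on k of at least 5 before accepting a deposit; the point is only that re-identification risk can be measured rather than guessed at.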

Friday, April 20, 2018

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn pulled in more than $50 million in 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it for research on how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement only involved fraud accusations on four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only of the grants themselves, but results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as Mr. Schunn throughout, even though he holds a PhD in psychology from Carnegie Mellon University.

Friday, December 8, 2017

University could lose millions from “unethical” research backed by Peter Thiel

Beth Mole
ARS Technica
Originally published November 14, 2017

Here is an excerpt:

According to HHS records, SIU (Southern Illinois University) had committed to following all HHS regulations—including safety requirements and having IRB approval and oversight—for all clinical trials, regardless of who funded the trials. If SIU fails to do so, it could jeopardize the $15 million in federal grant money the university receives for its other research.

Earlier, an SIU spokesperson had claimed that SIU didn’t need to follow HHS regulations in this case because Halford was acting as an independent researcher with Rational Vaccines. Thus, SIU had no legal responsibility to ensure proper safety protocols and wasn’t risking its federal funding.

In her e-mail, Buchanan asked for the “results of SIU’s evaluation of its jurisdiction over this research.”

In his response, Kruse noted that SIU was not aware of the St. Kitts trial until October 2016, two months after the trial was completed. But, he wrote, the university had opened an investigation into Halford’s work following his death in June of this year. The decision to investigate was also based on disclosures from American filmmaker Agustín Fernández III, who co-founded Rational Vaccines with Halford, Kruse noted.

The article is here.

Monday, November 6, 2017

Is It Too Late For Big Data Ethics?

Kalev Leetaru
Forbes.com
Originally published October 16, 2017

Here is an excerpt:

AI researchers are rushing to create the first glimmers of general AI, hoping for the key breakthroughs that take us towards a world in which machines gain consciousness. The structure of academic IRBs means that little of this work is subject to ethical review of any kind, and its highly technical nature means the general public remains largely unaware of the rapid pace of progress until it comes into direct life-or-death contact with consumers, as with driverless cars.

Could industry-backed initiatives like one announced by Bloomberg last month in partnership with BrightHive and Data for Democracy be the answer? It all depends on whether companies and organizations actively infuse these values into the work they perform and sponsor or whether these are merely public relations campaigns. As I wrote last month, when I asked the organizers of a recent data mining workshop why they did not require ethical review or replication datasets for their submissions, one of the organizers, a Bloomberg data scientist, responded only that the majority of other ACM computer science conferences don’t either. When asked why she and her co-organizers didn’t take a stand with their own workshop to require IRB review and replication datasets even if those other conferences did not, in an attempt to start a trend in the field, she would only repeat that such requirements are not common to their field. When asked whether Bloomberg would require its own data scientists to adhere to its new data ethics initiative and/or mandate that they integrate its principles into external academic workshops they help organize, a company spokesperson said they would try to offer comment, but had nothing further to add after nearly a week.

The article is here.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Wednesday, September 13, 2017

Peter Thiel sponsors offshore testing of herpes vaccine, sidestepping U.S. safety rules

Marisa Taylor
Kaiser News
Originally posted August 28, 2017

Here is an excerpt:

“What they’re doing is patently unethical,” said Jonathan Zenilman, chief of Johns Hopkins Bayview Medical Center’s Infectious Diseases Division. “There’s a reason why researchers rely on these protections. People can die.”

The risks are real. Experimental trials with live viruses could lead to infection if not handled properly or produce side effects in those already infected. Genital herpes is caused by two viruses that can trigger outbreaks of painful sores. Many patients have no symptoms, though a small number suffer greatly. The virus is primarily spread through sexual contact, but also can be released through skin.

The push behind the vaccine is as much political as medical. President Trump has vowed to speed up the FDA’s approval of some medicines. FDA Commissioner Scott Gottlieb, who had deep financial ties to the pharmaceutical industry, slammed the FDA before his confirmation for over-prioritizing consumer protection to the detriment of medical innovations.

“This is a test case,” said Bartley Madden, a retired Credit Suisse banker and policy adviser to the conservative Heartland Institute, who is another investor in the vaccine. “The FDA is standing in the way, and Americans are going to hear about this and demand action.”

American researchers are increasingly going offshore to developing countries to conduct clinical trials, citing rising domestic costs. But in order to approve the drug for the U.S. market, the FDA requires that clinical trials involving human participants be reviewed and approved by an IRB or an international equivalent. The IRB can reject research based on safety concerns.

The article is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists, and sociologists who have long complained that their work should not require as much scrutiny as, say, a medical researcher’s.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Saturday, June 11, 2016

Scientists Are Just as Confused About the Ethics of Big-Data Research as You

Sarah Zhang
Wired Magazine
Originally published May 20, 2016

Here is an excerpt:

Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage. (The OkCupid research does not seem to have gone through any kind of ethical review process, and a Cornell ethics review board approved the Facebook experiment.) And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

The article is here.

Friday, December 11, 2015

A Controversial Rewrite For Rules To Protect Humans In Experiments

By Rob Stein
NPR Morning Edition
Originally posted November 25, 2015

Throughout history, atrocities have been committed in the name of medical research.

Nazi doctors experimented on concentration camp prisoners. American doctors let poor black men with syphilis go untreated in the Tuskegee study. The list goes on.

To protect people participating in medical research, the federal government decades ago put in place strict rules on the conduct of human experiments.

Now the Department of Health and Human Services is proposing a major revision of these regulations, known collectively as the Common Rule. It's the first change proposed in nearly a quarter-century.

"We're in a very, very different world than when these regulations were first written," says Dr. Jerry Menikoff, who heads the HHS Office of Human Research Protections. "The goal is to modernize the rules to make sure terrible things don't happen."

The article and audio file are here.

Wednesday, October 21, 2015

Informed Consent and Standard of Care: What Must Be Disclosed

Ruth Macklin & Lois Shepherd
The American Journal of Bioethics
Volume 13, Issue 12, 2013

Abstract

The Office for Human Research Protections (OHRP) was correct in determining that the consent forms for the National Institutes of Health (NIH)-sponsored SUPPORT study were seriously flawed. Several articles defended the consent forms and criticized the OHRP's actions. Disagreement focuses on three central issues: (1) how risks and benefits should be described in informed consent documents; (2) the meaning and application of the concept of “standard of care” in the context of research; and (3) the proper role of OHRP. Examination of the consent forms reveals that they failed to disclose the reasonably foreseeable risks of the experimental interventions in the study, as well as the potential for differences in the degree of risk between these interventions. Although the concept of “standard of care” may be helpful in determining the ethical acceptability of other aspects of research, such as clinical equipoise, it is not helpful in discussing consent requirements.

The entire article is here.

Wednesday, August 12, 2015

Conflicts of Interest on Institutional Review Boards Remain Problematic

By Ed Silverman
The Wall Street Journal
Originally posted July 14, 2015

Here is an excerpt:

Well, a new study in JAMA Internal Medicine finds there is “significant progress” among IRB members in reporting and managing conflicts of interest when compared with the results of a similar study conducted in 2005. Still, the study authors, who queried 493 IRB members at 100 medical schools and 15 hospitals that received the most funding from NIH in 2012, say that problems remain.

First, though, here is the good news: There was a drop in the percentage of IRB members with conflicts – 30.4% last year compared with 39% in 2005, although this was not deemed to be a significant change. And those who were willing to report a conflict jumped to 80% from 55%. And 68% of IRB members with a conflict said they would leave the room when a protocol was discussed, compared with 38% in 2005.

The entire story is here.

Tuesday, July 14, 2015

‘Ethical responsibility’ or ‘a whole can of worms’: Differences in opinion on incidental finding review and disclosure in neuroimaging research from focus group discussions with participants, parents, IRB members, investigators, physicians and community members

Caitlin Cole, Linda E Petree, John P Phillips, Jody M Shoemaker, Mark Holdsworth, Deborah L Helitzer
J Med Ethics doi:10.1136/medethics-2014-102552

Abstract
Purpose 
To identify the specific needs, preferences and expectations of the stakeholders impacted by returning neuroimaging incidental findings to research participants.

Methods
Six key stakeholder groups were identified to participate in focus group discussions at our active neuroimaging research facility: Participants, Parents of child participants, Investigators, Institutional Review Board (IRB) Members, Physicians and Community Members. A total of 151 subjects attended these discussions. Transcripts were analysed using principles of Grounded Theory and group consensus coding.

Results 
A series of similar and divergent themes were identified across our subject groups. Similarities included beliefs that it is ethical for researchers to disclose incidental findings as it grants certain health and emotional benefits to participants. All stakeholders also recognised the potential psychological and financial risks to disclosure. Divergent perspectives elucidated consistent differences between our ‘Participant’ subjects (Participants, Parents, Community Members) and our ‘Professional’ subjects (IRB Members, Investigators and Physicians). Key differences included (1) what results should be reported, (2) participants’ autonomous right to research information and (3) the perception of the risk–benefit ratio in managing results.

Conclusions 
Understanding the perceived impact on all stakeholders involved in the process of disclosing incidental findings is necessary to determine appropriate research management policy. Our data further demonstrate the challenge of this task as different stakeholders evaluate the balance between risk and benefit related to their unique positions in this process. These findings offer some of the first qualitative insight into the expectations of the diverse stakeholders affected by incidental finding disclosure.

The entire article is here.

Tuesday, June 9, 2015

Who polices the 'Ethics Police'?

By Robert Klitzman
CNN
Originally posted May 26, 2015

Here is an excerpt:

Most people take for granted that some protective mechanism -- laws or watchdogs -- ensures that experiments are ethical. Indeed, research ethics committees or institutional review boards (IRBs) do review all human experiments. But they have become increasingly controversial.

Why? In part because they operate behind closed doors and, scientists now argue, often stymie, rather than support, key studies.

Investigators commonly call IRBs "the Ethics Police" and complain that these boards unnecessarily block or delay studies. As a researcher, I, too, have sometimes been frustrated by them.

Yet despite the controversy in the field, the public knows little about these boards, even though they affect all our lives.

The entire article is here.