Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label research ethics.

Thursday, August 2, 2018

Europe’s biggest research fund cracks down on ‘ethics dumping’

Linda Nordling
Nature.com
Originally posted July 3, 2018

Ethics dumping — doing research deemed unethical in a scientist’s home country in a foreign setting with laxer ethical rules — will be rooted out in research funded by the European Union, officials announced last week.

Applications to the EU’s €80-billion (US$93-billion) Horizon 2020 research fund will face fresh levels of scrutiny to make sure that research practices deemed unethical in Europe are not exported to other parts of the world. Wolfgang Burtscher, the European Commission’s deputy director-general for research, made the announcement at the European Parliament in Brussels on 29 June.

Burtscher said that a new code of conduct developed to curb ethics dumping will soon be applied to all EU-funded research projects. That means applicants will be referred to the code when they submit their proposals, and ethics committees will use the document when considering grant applications.

The information is here.

Monday, July 2, 2018

Eugenics never went away

Robert A Wilson
aeon.com
Originally posted June 5, 2018

Here is an excerpt:

Eugenics survivors are those who have lived through eugenic interventions, which typically begin with being categorised as less than fully human – as ‘feeble-minded’, as belonging to a racialised ethnic group assumed to be inferior, or as having a medical condition, such as epilepsy, presumed to be heritable. That categorisation enters them into a eugenics pipeline.

Each such pipeline has a distinctive shape. The Alberta pipeline involved institutionalisation at training schools for the ‘feeble-minded’ or mentally deficient, followed by a recommendation of sterilisation by a medical superintendent, which was then approved by the Eugenics Board, and executed without consent. Alberta’s introduction of guidance clinics also allowed eugenic sterilisation to reach into the non-institutionalised population, particularly schools.

What roles have the stories of eugenics survivors played in understanding eugenics? For the most part and until recently, these first-person narratives have been absent from the historical study of eugenics. On its traditional view, according to which eugenics ended around 1945, this is entirely understandable. The number of survivors dwindles over time, and those who survived often chose, as did many in Alberta, to bracket off rather than re-live their past. Yet the limited presence of survivor narratives in the study of eugenics also stems from a corresponding limit in the safe and receptive audience for those narratives.

Thursday, June 28, 2018

Are Most Clinical Trials Unethical?

Michel Shamy
American Council on Science and Health
Originally published May 21, 2018

Here is an excerpt:

Therefore, to render RCTs scientifically and ethically justifiable, certain conditions must be met. But what are they?

Much of the recent literature on the ethics of RCTs references the concept of “equipoise,” which refers to uncertainty or disagreement in the medical community. Though it is widely cited, “equipoise” has been defined inconsistently, is not universally accepted, and can be difficult to operationalize. Most scientists agree that we should not do another study when the answer is known ahead of time; to do so would be redundant, wasteful, and ultimately harmful to patients. Given that some estimates suggest as much as 85% of clinical research may be wasteful, there is a strong imperative to develop clear criteria for when RCTs are necessary. In the absence of such criteria, RCTs that are unnecessary may be allowed to proceed, and unnecessary RCTs are, by definition, unethical.

We have proposed a preliminary set of criteria to guide judgments about whether a proposed RCT is scientifically justified. Every RCT should (1) ask a clear question, (2) assert a specific hypothesis, and (3) ensure that the hypothesis has not already been answered by available knowledge, including non-randomized studies. Then, we examined a sample of high quality, published RCTs and found that only 44% met these criteria.

The information is here.

Thursday, June 7, 2018

Protecting confidentiality in genomic studies

MIT Press Release
Originally released May 7, 2018

Genome-wide association studies, which look for links between particular genetic variants and incidence of disease, are the basis of much modern biomedical research.

But databases of genomic information pose privacy risks. From people’s raw genomic data, it may be possible to infer their surnames and perhaps even the shapes of their faces. Many people are reluctant to contribute their genomic data to biomedical research projects, and an organization hosting a large repository of genomic data might conduct a months-long review before deciding whether to grant a researcher’s request for access.

In a paper published in Nature Biotechnology (https://doi.org/10.1038/nbt.4108), researchers from MIT and Stanford University present a new system for protecting the privacy of people who contribute their genomic data to large-scale biomedical studies. Where earlier cryptographic methods were so computationally intensive that they became prohibitively time consuming for more than a few thousand genomes, the new system promises efficient privacy protection for studies conducted over as many as a million genomes.
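The release does not spell out the cryptography, but protocols in this space are commonly built on secret sharing, in which each contributor's value is split into random-looking pieces held by separate, non-colluding servers, so that only agreed-upon aggregates are ever reconstructed. Below is a minimal illustrative sketch of additive secret sharing in Python; the genotype values, the two-server setup, and all names are hypothetical, and this is not the authors' actual system.

```python
import random

PRIME = 2**61 - 1  # large prime modulus; all arithmetic happens in this field

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Hypothetical genotype values: copies (0, 1, or 2) of a variant per person
genotypes = [0, 1, 2, 1, 0, 2]

# Each participant splits their value between two non-colluding servers.
server_a, server_b = [], []
for g in genotypes:
    a, b = share(g, 2)
    server_a.append(a)
    server_b.append(b)

# Each server sums its own shares locally; each partial sum is uniformly
# random on its own, so neither server learns any individual genotype.
partial_a = sum(server_a) % PRIME
partial_b = sum(server_b) % PRIME

# Only the combined aggregate (here, a total allele count) is revealed.
total = (partial_a + partial_b) % PRIME
assert total == sum(genotypes)
print(f"Aggregate variant count across the cohort: {total}")
```

The efficiency gains reported in the paper come from engineering such protocols to scale to large studies; the sketch only shows why the split data leaks nothing on its own.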

The release is here.

Friday, June 1, 2018

The toxic legacy of Canada's CIA brainwashing experiments

Ashifa Kassam
The Guardian
Originally published May 3, 2018

Here is an excerpt:

Patients were subjected to high-voltage electroshock therapy several times a day, forced into drug-induced sleeps that could last months and injected with megadoses of LSD.

After reducing them to a childlike state – at times stripping them of basic skills such as how to dress themselves or tie their shoes – Cameron would attempt to reprogram them by bombarding them with recorded messages for up to 16 hours at a time. First came negative messages about their inadequacies, followed by positive ones, in some cases repeated up to half a million times.

“He couldn’t get his patients to listen to them enough so he put speakers in football helmets and locked them on their heads,” said Johnson. “They were going crazy banging their heads into walls, so he then figured he could put them in a drug-induced coma and play the tapes as long as he needed.”

Along with intensive bouts of electroshock therapy, Johnson’s grandmother was given injections of LSD on 14 occasions. “She said that made her feel like her bones were melting. She would say: ‘I don’t want these,’” said Johnson. “And the doctors and nurses would say to her: ‘You’re a bad wife, you’re a bad mother. If you wanted to get better, you would do this for your family. Think about your daughter.’”

The information is here.

Friday, May 25, 2018

The $3-Million Research Breakdown

Jodi Cohen
www.propublica.org
Originally published April 26, 2018

Here is an excerpt:

In December, the university quietly paid a severe penalty for Pavuluri’s misconduct and its own lax oversight, after the National Institute of Mental Health demanded weeks earlier that the public institution — which has struggled with declining state funding — repay all $3.1 million it had received for Pavuluri’s study.

In issuing the rare rebuke, federal officials concluded that Pavuluri’s “serious and continuing noncompliance” with rules to protect human subjects violated the terms of the grant. NIMH said she had “increased risk to the study subjects” and made any outcomes scientifically meaningless, according to documents obtained by ProPublica Illinois.

Pavuluri’s research is also under investigation by two offices in the U.S. Department of Health and Human Services: the inspector general’s office, which examines waste, fraud and abuse in government programs, according to subpoenas obtained by ProPublica Illinois, and the Office of Research Integrity, according to university officials.

The article is here.

Wednesday, May 23, 2018

Growing brains in labs: why it's time for an ethical debate

Ian Sample
The Guardian
Originally published April 24, 2018

Here is an excerpt:

The call for debate has been prompted by a raft of studies in which scientists have made “brain organoids”, or lumps of human brain from stem cells; grown bits of human brain in rodents; and kept slivers of human brain alive for weeks after surgeons removed the tissue from patients. In one case, scientists recorded a surge of electrical activity from a ball of brain and retinal cells when they shined a light on it, though this does not indicate consciousness.

The research is driven by a need to understand how the brain works and how it fails in neurological disorders and mental illness. Brain organoids have already been used to study autism spectrum disorders, schizophrenia and the unusually small brain size seen in some babies infected with Zika virus in the womb.

“This research is essential to alleviate human suffering. It would be unethical to halt the work,” said Nita Farahany, professor of law and philosophy at Duke University in North Carolina. “What we want is a discussion about how to enable responsible progress in the field.”

The article is here.

Saturday, May 12, 2018

Bystander risk, social value, and ethics of human research

S. K. Shah, J. Kimmelman, A. D. Lyerly, H. F. Lynch, and others
Science, 13 April 2018: 158-159

Two critical, recurring questions can arise in many areas of research with human subjects but are poorly addressed in much existing research regulation and ethics oversight: How should research risks to “bystanders” be addressed? And how should research be evaluated when risks are substantial but not offset by direct benefit to participants, and the benefit to society (“social value”) is context-dependent? We encountered these issues while serving on a multidisciplinary, independent expert panel charged with addressing whether human challenge trials (HCTs) in which healthy volunteers would be deliberately infected with Zika virus could be ethically justified (1). Based on our experience on that panel, which concluded that there was insufficient value to justify a Zika HCT at the time of our report, we propose a new review mechanism to preemptively address issues of bystander risk and contingent social value.

(cut)

Some may object that generalizing and institutionalizing this approach could slow valuable research by adding an additional layer of review. However, embedding this process within funding agencies could preempt ethical problems that might otherwise stymie research. Concerns that CERCs might suffer from “mission creep” could be countered by establishing clear charters and triggers for deploying CERCs. Unlike IRBs, their opinions should be publicly available to provide precedent for future research programs or for IRBs evaluating particular protocols at a later date.

The information is here.

Thursday, April 26, 2018

Practical Tips for Ethical Data Sharing

Michelle N. Meyer
Advances in Methods and Practices in Psychological Science
Volume 1, Issue 1, pp. 131-144

Abstract

This Tutorial provides practical dos and don’ts for sharing research data in ways that are effective, ethical, and compliant with the federal Common Rule. I first consider best practices for prospectively incorporating data-sharing plans into research, discussing what to say—and what not to say—in consent forms and institutional review board applications, tools for data de-identification and how to think about the risks of re-identification, and what to consider when selecting a data repository. Turning to data that have already been collected, I discuss the ethical and regulatory issues raised by sharing data when the consent form either was silent about data sharing or explicitly promised participants that the data would not be shared. Finally, I discuss ethical issues in sharing “public” data.
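One common way to think about the re-identification risk the Tutorial mentions is k-anonymity: every released record should be indistinguishable from at least k - 1 others on its quasi-identifiers (age band, ZIP prefix, and so on). Here is a minimal sketch of such a check, assuming pandas and entirely made-up column names and data:

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Smallest group size over the quasi-identifier columns.

    A result of k means every record matches at least k - 1 others on
    these columns; a small k signals elevated re-identification risk.
    """
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical participant records after direct identifiers are dropped
df = pd.DataFrame({
    "age_band":  ["30-39", "30-39", "40-49", "40-49", "30-39", "40-49"],
    "zip3":      ["191",   "191",   "191",   "606",   "191",   "606"],
    "diagnosis": ["A",     "B",     "A",     "A",     "B",     "B"],
})

k = k_anonymity(df, ["age_band", "zip3"])
print(f"k-anonymity on (age_band, zip3): k = {k}")
# k = 1 here: one participant is unique on age band and ZIP prefix alone,
# so generalize further (wider bands, coarser geography) or suppress
# those records before depositing the data in a repository.
```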

The article is here.

Friday, April 20, 2018

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn pulled in more than $50 million across 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it for research on how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement involved fraud accusations on only four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only the grants themselves but also their results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as Mr. Schunn throughout, even though he holds a PhD in psychology from Carnegie Mellon University.

Wednesday, February 28, 2018

Can scientists agree on a code of ethics?

David Ryan Polgar
BigThink.com
Originally published January 30, 2018

Here is an excerpt:

Regarding the motivation for developing this Code of Ethics, Hug mentioned the threat to the credibility of research if the standards seem too loose. She mentioned the pressure that many young scientists face to be prolific in their research, hinting at the tension between quantity and quality. "We want research to remain credible because we want it to have an impact on policymakers, research being turned into action." One of Hug's goals in presenting the Code of Ethics, she said, was to have various research institutions endorse the document and begin distributing it within their networks.

“All these goals will conflict with each other," said Jodi Halpern, referring to the issues that may get in the way of adopting a code of ethics for scientists. "People need rigorous education in ethical reasoning, which is just as rigorous as science education...what I’d rather have as a requirement, if I’d like to put teeth anywhere. I’d like to have every doctoral student not just have one of those superficial IRB fake compliance courses, but I’d like to have them have to pass a rigorous exam showing how they would deal with certain ethical dilemmas. And everybody who will be the head of a lab someday will have really learned how to do that type of thinking.”

The article is here.

Thursday, February 22, 2018

NIH adopts new rules on human research, worrying behavioral scientists

William Wan
The Washington Post
Originally posted January 24, 2018

Last year, the National Institutes of Health announced plans to tighten its rules for all research involving humans — including new requirements for scientists studying human behavior — and touched off a panic.

Some of the country’s biggest scientific associations, including the American Psychological Association and the Federation of Associations in Behavioral and Brain Sciences, penned impassioned letters over the summer warning that the new policies could slow scientific progress, increase red tape and present obstacles for researchers working in smaller labs with fewer financial and administrative resources to deal with the added requirements. More than 3,500 scientists signed an open letter to NIH director Francis Collins.

The new rules are scheduled to take effect Thursday. They will have a big impact on how research is conducted, especially in fields like psychology and neuroscience. NIH distributes more than $32 billion each year, making it the largest public funder of biomedical and health research in the world, and the rules apply to any NIH-supported work that studies human subjects and is evaluating the effects of interventions on health or behavior.

The article is here.

Thursday, February 1, 2018

Ethics for healthcare data is obsessed with risk – not public benefits

Tim Spector and Barbara Prainsack
The Conversation
Originally published January 5, 2018

Here is an excerpt:

Health researchers working with human participants – or their identifiable information – need to jump through lots of ethical and bureaucratic hoops. The underlying rationale is that health research poses particularly high risks to people, and that these risks need to be minimised. But does the same rationale apply to non-invasive research using digital health data? Setting aside physically invasive research, which absolutely should maintain the most stringent of safeguards, is data-based health research really riskier than other research that analyses people's information?

Many corporations can use data from their customers for a wide range of purposes without needing research ethics approval, because their users have already "agreed" to this (by ticking a box), or because the activity doesn't qualify as health research. But is the assumption that it is less risky justified?

Facebook and Google hold voluminous and fine-grained datasets on people. They analyse pictures and text posted by users. But they also study behavioural information, such as whether or not users "like" something or support political causes. They do this to profile users and discern new patterns connecting previously unconnected traits and behaviours. These findings are used for marketing; but they also contribute to knowledge about human behaviour.

The information is here.

Tuesday, January 16, 2018

3D Printed Biomimetic Blood Brain Barrier Eliminates Need for Animal Testing

Hannah Rose Mendoza
3Dprint.com
Originally published December 21, 2017

The blood-brain barrier (BBB) may sound like a rating system for avoiding horror movies, but in reality it is a semi-permeable membrane responsible for restricting and regulating the entry of neurotoxic compounds, diseases, and circulating blood into the brain. It exists as a defense mechanism to protect the brain from direct contact with damaging entities carried in the body. Normally, this is something that is important to maintain as a strong defense; however, there are times when medical treatments require the ability to trespass beyond this biological barrier without damaging it. This is especially true now in the era of nanomedicine, when therapeutic treatments have been developed to combat brain cancer, neurodegenerative diseases, and even the effects of trauma-based brain damage.

In order to advance medical research in these areas, it is essential to work in an environment that accurately represents the BBB. As such, researchers have turned to animal subjects, a practice that raises significant ethical and moral questions.

The story is here.

Friday, December 8, 2017

University could lose millions from “unethical” research backed by Peter Thiel

Beth Mole
ARS Technica
Originally published November 14, 2017

Here is an excerpt:

According to HHS records, SIU (Southern Illinois University) had committed to following all HHS regulations—including safety requirements and having IRB approval and oversight—for all clinical trials, regardless of who funded the trials. If SIU fails to do so, it could jeopardize the $15 million in federal grant money the university receives for its other research.

Earlier, an SIU spokesperson had claimed that SIU didn’t need to follow HHS regulations in this case because Halford was acting as an independent researcher with Rational Vaccines. Thus, SIU had no legal responsibility to ensure proper safety protocols and wasn’t risking its federal funding.

In her e-mail, Buchanan asked for the “results of SIU’s evaluation of its jurisdiction over this research.”

In his response, Kruse noted that SIU was not aware of the St. Kitts trial until October 2016, two months after the trial was completed. But, he wrote, the university had opened an investigation into Halford’s work following his death in June of this year. The decision to investigate was also based on disclosures from American filmmaker Agustín Fernández III, who co-founded Rational Vaccines with Halford, Kruse noted.

The article is here.

Thursday, December 7, 2017

Attica: It’s Worse Than We Thought

Heather Ann Thompson
The New York Times
Originally posted November 19, 2017

Here is an excerpt:

As the fine print of that 1972 article read: “We are indebted to the inmates of the Attica Correctional Facility who participated in this study and to the warden and his administration for their help and cooperation.” This esteemed physician, a man working for two of New York’s most respected hospitals and receiving generous research funding from the N.I.H., was indeed conducting leprosy experiments at Attica.

But which of Attica’s nearly 2,400 prisoners, I wondered, was the subject of experiments relating to this crippling disease, without, as Dr. Brandriss admitted, adequate consent? Might it have been the 19-year-old who was at Attica because he had sliced the top of a neighbor’s convertible? Or a man imprisoned there for more serious offenses? Either way, no jury had sentenced them to being a guinea pig in any experiment relating to a disease as painful and disfiguring as leprosy.

And what about the hundreds of corrections officers and civilian employees working at Attica? Even if no one in this extremely crowded facility was actually exposed to this dreaded disease, one in which “prolonged close contact” with an infected patient is a most serious risk factor, were these state employees informed at all that medical experiments were being conducted on the men in their charge?

This is not the first time prisons have allowed secret medical experiments on those locked inside. A 1998 book on Holmesburg Prison in Pennsylvania revealed that a doctor there, Albert Kligman, had been experimenting on prisoners for years. After the book appeared, nearly 300 former prisoners sued him, the University of Pennsylvania and the manufacturers of the substances to which they had been exposed, but none of the defendants was held accountable.

The article is here.

Saturday, December 2, 2017

Japanese doctor who exposed a drug too good to be true calls for morality and reforms

Tomoko Otake
Japan Times
Originally posted November 15, 2017

Here is an excerpt:

Kuwajima says the Diovan case is a sobering reminder that large-scale clinical trials published in top medical journals should not be blindly trusted, as they can be exploited by drugmakers rushing to sell their products before their patents run out.

“I worked at a research hospital and had opportunities to try new or premarket drugs on patients, so I knew from early on that Diovan and the same class of drugs called ARB wouldn’t work, especially for elderly patients,” Kuwajima recalled in a recent interview at Tokyo Metropolitan Geriatric Hospital, where he has retired from full-time practice but still sees patients two days a week. “I had a strong sense of crisis that hordes of elderly people — whose ranks were growing as the population grayed — would be prescribed a drug that didn’t work.”

Kuwajima said he immediately found the Diovan research suspicious because the results were just too good to be true. This was before Novartis admitted that it had paid five professors conducting studies at their universities a total of ¥1.1 billion in “research grants,” and even had Shirahashi, a Novartis employee purporting to be a university lecturer, help with statistical analyses for the papers.

The article is here.

Tuesday, October 17, 2017

Is it Ethical for Scientists to Create Nonhuman Primates with Brain Disorders?

Carolyn P. Neuhaus
The Hastings Center
Originally published on September 25, 2017

Here is an excerpt:

Such is the rationale for creating primate models: the brain disorders under investigation cannot be accurately modelled in other nonhuman organisms, because of differences in genetics, brain structure, and behaviors. But research involving humans with brain disorders is also morally fraught. Some people with brain disorders experience impairments to decision-making capacity as a component or symptom of disease, and therefore are unable to provide truly informed consent to research participation. Some of the research is too invasive, and would be grossly unethical to carry out with human subjects. So, nonhuman primates, and macaques in particular, occupy a “sweet spot.” Their genetic code and brain structure are sufficiently similar to humans’ so as to provide a valid and accurate model of human brain disorders. But they are not conferred the protections from research that apply to humans and to some non-human primates, notably chimpanzees and great apes. In the United States, for example, chimpanzees are protected from invasive research, but other primates are not. Some have suggested, including in a recent article in the Journal of Medical Ethics, that protections like those afforded to chimpanzees ought to be extended to other primates and other animals, such as dogs, as evidence mounts that they also have complex cognitive, social, and emotional lives. For now, macaques and other primates remain in use.

Prior to the discovery of genome editing tools like ZFNs, TALENs, and most recently, CRISPR, it was extremely challenging, almost prohibitively so, to create non-human primates with precise, heritable genome modifications. But CRISPR (clustered regularly interspaced short palindromic repeats) represents a technological advance that brings genome engineering of non-human primates well within reach.

The article is here.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Tuesday, September 26, 2017

Drug company faked cancer patients to sell drug

Aaron M. Kessler
CNN.com
Originally published September 6, 2017

When Insys Therapeutics got approval to sell an ultra-powerful opioid for cancer patients with acute pain in 2012, it soon discovered a problem: finding enough cancer patients to use the drug.

To boost sales, the company allegedly took patients who didn't have cancer and made it look like they did.

The drug maker used a combination of tactics, such as falsifying medical records, misleading insurance companies and providing kickbacks to doctors in league with the company, according to a federal indictment and ongoing congressional investigation by Sen. Claire McCaskill, a Democrat from Missouri.

The new report by McCaskill's office released Wednesday includes allegations about just how far the company went to push prescriptions of its sprayable form of fentanyl, Subsys.

Because of the high cost associated with Subsys, most insurers wouldn't pay for it unless it was approved in advance. That process, likely familiar to anyone who's taken an expensive medication, is called "prior-authorization."

The article is here.