Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label research ethics. Show all posts

Sunday, August 13, 2017

Ethical and legal considerations in psychobiography

Jason D Reynolds and Taewon Choi
American Psychologist 2017 Jul-Aug;72(5):446-458

Abstract

Despite psychobiography's long-standing history in the field of psychology, there has been relatively little discussion of ethical issues and guidelines in psychobiographical research. The Ethics Code of the American Psychological Association (APA) does not address psychobiography. The present article highlights the value of psychobiography to psychology, reviews the history and current status of psychobiography in the field, examines the relevance of existing APA General Principles and Ethical Standards to psychobiographical research, and introduces a best practice ethical decision-making model to assist psychologists working in psychobiography. Given the potential impact of psychologists' evaluative judgments on other professionals and the lay public, psychologists and other mental health professionals are urged to maintain a high standard of ethical vigilance in conducting and reporting psychobiography.

The article is here.

Thursday, August 10, 2017

Predatory Journals Hit By ‘Star Wars’ Sting

By Neuroskeptic
discovermagazine.com
Originally published July 19, 2017

A number of so-called scientific journals have accepted a Star Wars-themed spoof paper. The manuscript is an absurd mess of factual errors, plagiarism and movie quotes. I know because I wrote it.

Inspired by previous publishing “stings”, I wanted to test whether ‘predatory’ journals would publish an obviously absurd paper. So I created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars. I filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.

Four journals fell for the sting. The American Journal of Medical and Biological Research (SciEP) accepted the paper, but asked for a $360 fee, which I didn’t pay. Amazingly, three other journals not only accepted but actually published the spoof: the International Journal of Molecular Biology: Open Access (MedCrave), the Austin Journal of Pharmacology and Therapeutics (Austin), and the American Research Journal of Biosciences (ARJ). I hadn’t expected this, as all those journals charge publication fees, but I never paid them a penny.

The blog post is here.

Tuesday, July 18, 2017

Responding to whistleblower’s claims, Duke admits research data falsification

Ray Gronberg
The Herald-Sun
Originally published July 2, 2017

In-house investigators at Duke University believe a former lab tech falsified or fabricated data that went into 29 medical research reports, lawyers for the university say in their answer to a federal whistleblower lawsuit against it.

Duke’s admissions concern the work of Erin Potts-Kant, and a probe it began in 2013 when she was implicated in an otherwise-unrelated embezzlement. The lawsuit, from former lab analyst Joseph Thomas, contends Duke and some of its professors used the phony data to fraudulently obtain federal research grants. He also alleges they ignored warning signs about Potts-Kant’s work, and tried to cover up the fraud.

The university’s lawyers have tried to get the case dismissed, but in April, a federal judge said it can go ahead. The latest filings thus represent Duke’s first answer to the substance of Thomas’ allegations.

Up front, it said Potts-Kant told a Duke investigating committee that she’d faked data that wound up being “included in various publications and grant applications.”

The article is here.

Friday, June 16, 2017

Do You Want to Be a Cyborg?

Agata Sagan and Peter Singer
Project Syndicate
Originally posted May 17, 2017

Here is an excerpt:

In the United States, Europe, and most other countries with advanced biomedical research, strict regulations on the use of human subjects would make it extremely difficult to get permission to carry out experiments aimed at enhancing our cognitive abilities by linking our brains to computers. US regulations drove Phil Kennedy, a pioneer in the use of computers to enable paralyzed patients to communicate by thought alone, to have electrodes implanted in his own brain in order to make further scientific progress. Even then, he had to go to Belize, in Central America, to find a surgeon willing to perform the operation. In the United Kingdom, cyborg advocate Kevin Warwick and his wife had data arrays implanted in their arms to show that direct communication between the nervous systems of separate human beings is possible.

Musk has suggested that the regulations governing the use of human subjects in research could change. That may take some time. Meanwhile, freewheeling enthusiasts are going ahead anyway. Tim Cannon doesn’t have the scientific or medical qualifications of Phil Kennedy or Kevin Warwick, but that hasn’t stopped him from co-founding a Pittsburgh company that implants bionic devices, often after he has first tried them out on himself. His attitude is, as he said at an event billed as “The world’s first cyborg-fair,” held in Düsseldorf in 2015, “Let’s just do it and really go for it.”

People at the Düsseldorf cyborg-fair had magnets, radio frequency identification chips, and other devices implanted in their fingers or arms. The surgery is often carried out by tattooists and sometimes veterinarians, because qualified physicians and surgeons are reluctant to operate on healthy people.

The article is here.

Tuesday, June 6, 2017

Some Social Scientists Are Tired of Asking for Permission

Kate Murphy
The New York Times
Originally published May 22, 2017

Who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that their studies do not warrant as much scrutiny as, say, a medical researcher’s.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether or not their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

The article is here.

Thursday, April 6, 2017

Would You Deliver an Electric Shock in 2015?

Dariusz Doliński, Tomasz Grzyb, and others
Social Psychological and Personality Science
First Published January 1, 2017

Abstract

In spite of the over 50 years which have passed since the original experiments conducted by Stanley Milgram on obedience, these experiments are still considered a turning point in our thinking about the role of the situation in human behavior. While ethical considerations prevent a full replication of the experiments from being prepared, a certain picture of the level of obedience of participants can be drawn using the procedure proposed by Burger. In our experiment, we have expanded it by controlling for the sex of participants and of the learner. The results show a level of obedience toward instructions comparable to the high level observed in the original Milgram studies. Results regarding the influence of the sex of participants and of the “learner,” as well as of personality characteristics, do not allow us to unequivocally accept or reject the hypotheses offered.

The article is here.

“After 50 years, it appears nothing has changed,” said social psychologist Tomasz Grzyb, an author of the new study, which appeared this week in the journal Social Psychological and Personality Science.

A Los Angeles Times article summarizes the study here.

Sunday, March 19, 2017

Revamping the US Federal Common Rule: Modernizing Human Participant Research Regulations

James G. Hodge Jr. and Lawrence O. Gostin
JAMA. Published online February 22, 2017

On January 19, 2017, the Office for Human Research Protections (OHRP), Department of Health and Human Services, and 15 federal agencies published a final rule to modernize the Federal Policy for the Protection of Human Subjects (known as the “Common Rule”).1 Initially introduced more than a quarter century ago, the Common Rule predated modern scientific methods and findings, notably human genome research.

Research enterprises now encompass vast multicenter trials in both academia and the private sector. The volume, types, and availability of public/private data and biospecimens have increased exponentially. Federal agencies demanded more accountability, research investigators sought more flexibility, and human participants desired more control over research. Most rule changes become effective in 2018, giving institutions time for implementation.

The article is here.

Saturday, March 11, 2017

The Moral and Legal Permissibility of Placebo-Controlled Trials

Mina Henaen
Princeton Journal of Bioethics
Princeton University
Originally posted August 15, 2016

Leaders of research ethics organizations have made placebo-controlled trials illegal whenever placebo groups would not receive currently existing treatment for their ailment, slowing down research for cheaper and more effective treatments. In this essay, I argue that placebo-controlled trials (PCTs) are both morally and legally permissible whenever they provide care that is better than the local standard of care. Contrary to what opponents of PCTs often put forth, I argue that researchers conducting PCTs are not exploiting developing nations, or subjects from those nations, when they conduct their research there. I then show how these researchers are also not legally required to provide treatment to their placebo-group subjects. I present some of the benefits of such research to the placebo groups as well and consider the moral impermissibility of making such research illegal.

The article is here.

Saturday, February 18, 2017

A Crime in the Cancer Lab

Theodora Ross
The New York Times
Originally published January 28, 2017

Here is an excerpt:

We have all read about incidents of scientific misconduct; in recent years, a number of manuscripts based on fake research have been retracted. But they usually involved scientists who cut corners or fabricated data, not deliberate sabotage. The poisoned flasks were a first for me. Falsified data is a crime against scientific truth. This was personal.

I turned to my colleagues to ask how to respond, and to my surprise, they all said the same thing: my student, Heather Ames, was probably sabotaging herself.

Their reasoning? She wanted an excuse for why things weren't working in her experiments. Competition and the pressure to get results quickly is ever-present in the world of biomedical research, so it's not out of the question that a young scientist might succumb to the stress.

The article is here.

Monday, December 26, 2016

Reframing Research Ethics: Towards a Professional Ethics for the Social Sciences

Nathan Emmerich
Sociological Research Online, 21 (4), 7
DOI: 10.5153/sro.4127

Abstract

This article is premised on the idea that were we able to articulate a positive vision of the social scientist's professional ethics, this would enable us to reframe social science research ethics as something internal to the profession. As such, rather than suffering under the imperialism of a research ethics constructed for the purposes of governing biomedical research, social scientists might argue for ethical self-regulation with greater force. I seek to provide the requisite basis for such an 'ethics' by, first, suggesting that the conditions which gave rise to biomedical research ethics are not replicated within the social sciences. Second, I argue that social science research can be considered as the moral equivalent of the 'true professions.' Not only does it have an ultimate end, but it is one that is – or, at least, should be – shared by the state and society as a whole. I then present a reading of confidentiality as a methodological – and not simply ethical – aspect of research, one that offers further support for the view that social scientists should attend to their professional ethics and the internal standards of their disciplines, rather than the contemporary discourse of research ethics that is rooted in the bioethical literature. Finally, and by way of a conclusion, I consider the consequences of the idea that social scientists should adopt a professional ethics and propose that the Clinical Ethics Committee might provide an alternative model for the governance of social science research.

The article is here.

Sunday, October 30, 2016

The ethics of animal research: a survey of the public and scientists in North America

Ari R. Joffe, Meredith Bara, Natalie Anton and Nathan Nobis
BMC Medical Ethics
2016

Background

To determine whether the public and scientists consider common arguments (and counterarguments) in support (or not) of animal research (AR) convincing.

Methods

After validation, the survey was sent to samples of the public (Sampling Survey International (SSI; Canadian), Amazon Mechanical Turk (AMT; US), a Canadian city festival and children’s hospital), medical students (two second-year classes), and scientists (corresponding authors, and academic pediatricians). We presented questions about common arguments (with their counterarguments) to justify the moral permissibility (or not) of AR. Responses were compared using Chi-square with Bonferroni correction.
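The Bonferroni correction mentioned in the Methods is a simple multiple-comparisons adjustment: the significance threshold is divided by the number of tests performed. A minimal sketch of that adjustment (using hypothetical p-values, not the paper's data):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each p-value, whether it remains significant
    after a Bonferroni correction for multiple comparisons."""
    m = len(p_values)
    adjusted_alpha = alpha / m  # each test is held to alpha / m
    return [p < adjusted_alpha for p in p_values]

# Three hypothetical pairwise group comparisons:
p_values = [0.001, 0.02, 0.04]
print(bonferroni_significant(p_values))  # → [True, False, False]
```

With three comparisons the per-test threshold drops to 0.05/3 ≈ 0.0167, so only the first p-value survives; each chi-square comparison in the study would be judged against a threshold shrunk this way.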

Results

There were 1220 public [SSI, n = 586; AMT, n = 439; Festival, n = 195; Hospital n = 107], 194/331 (59 %) medical student, and 19/319 (6 %) scientist [too few to report] responses. Most public respondents were <45 years (65 %), had some College/University education (83 %), and had never done AR (92 %). Most public and medical student respondents considered ‘benefits arguments’ sufficient to justify AR; however, most acknowledged that counterarguments suggesting alternative research methods may be available, or that it is unclear why the same ‘benefits arguments’ do not apply to using humans in research, significantly weakened ‘benefits arguments’. Almost all were not convinced of the moral permissibility of AR by ‘characteristics of non-human-animals arguments’, including that non-human-animals are not sentient, or are property. Most were not convinced of the moral permissibility of AR by ‘human exceptionalism’ arguments, including that humans have more advanced mental abilities, are of a special ‘kind’, can enter social contracts, or face a ‘lifeboat situation’. Counterarguments explained much of this, including that not all humans have these more advanced abilities [‘argument from species overlap’], and that the notion of ‘kind’ is arbitrary [e.g., why are we not of the ‘kind’ ‘sentient-animal’ or ‘subject-of-a-life’?]. Medical students were more supportive (80 %) of AR at the end of the survey (p < 0.05).

Conclusions

Responses suggest that support for AR may not be based on cogent philosophical rationales, and more open debate is warranted.

Tuesday, September 27, 2016

Workshop on ethics of monkey research earns cheers and boos

By David Grimm
Science Insider
Originally posted September 8, 2016

Here is an excerpt:

Jeffrey Kahn, a bioethicist at Johns Hopkins University in Baltimore, Maryland, and the chair of the chimpanzee report, added that nonhuman primate research should only be conducted if it has to be conducted. “It’s not ethically acceptable to do research that is not necessary. Being ‘necessary’ is not the same as ‘worth doing.’”

That led to a debate about just what constituted “necessary” and “moral justification.” Even research that doesn’t have an immediate translation to people—like figuring out how the monkey brain works—is necessary, argued Newsome, because it could eventually lead to significant new knowledge that might improve human health. “It will be a tragedy for the world if we don’t leave room for basic science.” Most attendees seemed to agree, with some stating that not doing research on monkeys was ethically indefensible because humans would suffer down the line.

Despite that ethical debate, animal welfare groups said they were upset that science—not welfare—dominated the workshop. Of the 13 speakers, eight make their living working with nonhuman primates. The workshop also only devoted 2 minutes—instead of its scheduled 30 minutes—to public comments. “We are extremely disappointed that no animal protection groups were invited,” wrote Kathleen Conlee, vice president of animal research issues for The Humane Society of the United States in Washington, D.C., in an email to ScienceInsider. “It is clear that NIH has not followed through on what Congress requested, which was to examine ethical policies and processes.”

The article is here.

Friday, September 23, 2016

Stop ignoring misconduct

Donald S. Kornfeld & Sandra L. Titus
Nature
Originally posted 31 August 2016

The history of science shows that irreproducibility is not a product of our times. Some 350 years ago, the chemist Robert Boyle penned essays on “the unsuccessfulness of experiments”. He warned readers to be sceptical of reported work. “You will meet with several Observations and Experiments, which ... may upon further tryal disappoint your expectation.” He attributed the problem to a 'lack of skill in the scientist and the lack of purity of the ingredients', and what would today be referred to as inadequate statistical power.

By 1830, polymath Charles Babbage was writing in more cynical terms. In Reflections on the Decline of Science in England, he complains of “several species of impositions that have been practised in science”, namely “hoaxing, forging, trimming and cooking”.

In other words, irreproducibility is the product of two factors: faulty research practices and fraud.

The article is here.

Monday, August 1, 2016

Panel slams plan for human research rules

by David Malakoff
Science  08 Jul 2016:
Vol. 353, Issue 6295, pp. 106-107
DOI: 10.1126/science.353.6295.106

In a surprise development certain to fuel a long-running controversy, a prominent science advisory panel is calling on the U.S. government to abandon a nearly finished update to rules on protecting human research participants. It should wait for a new high-level commission, created by Congress and the president, to recommend improvements and then start over, the panel says.

Policy insiders say the recommendation, made 29 June by a committee of the National Academies of Sciences, Engineering, and Medicine that is examining ways to reduce the regulatory burden on academic scientists, is the political equivalent of a comic book hero trying to step in front of a speeding train in a bid to prevent a wreck.

It's not clear, however, whether the panel will succeed in stopping the regulatory express — or just get run over. Both the Obama administration, which has been pushing to complete the new rules this year, and lawmakers in Congress would need to back the halt — and so far they've been silent.

Still, many researchers and university groups are thrilled with the panel's recommendation, noting that they have repeatedly objected to some of the proposed rule changes as unworkable, but with little apparent impact.

The article is here.

Wednesday, July 27, 2016

Research fraud: the temptation to lie – and the challenges of regulation

Ian Freckelton
The Conversation
Originally published July 5, 2016

Most scientists and medical researchers behave ethically. However, in recent years, the number of high-profile scandals in which researchers have been exposed as having falsified their data raises the issue of how we should deal with research fraud.

There is little scholarship on this subject that crosses disciplines and engages with the broader phenomenon of unethical behaviour within the domain of research.

This is partly because disciplines tend to operate in their silos and because universities, in which researchers are often employed, tend to minimise adverse publicity.

When scandals erupt, embarrassment in a particular field is experienced for a short while – and researchers may leave their university. But few articles are published in scholarly journals about how the research fraud was perpetrated; how it went unnoticed for a significant period of time; and how prevalent the issue is.

The article is here.

Wednesday, July 20, 2016

An NYU Study Gone Wrong, and a Top Researcher Dismissed

By Benedict Carey
The New York Times
Originally posted June 27, 2016

New York University’s medical school has quietly shut down eight studies at its prominent psychiatric research center and parted ways with a top researcher after discovering a series of violations in a study of an experimental, mind-altering drug.

A subsequent federal investigation found lax oversight of study participants, most of whom had serious mental issues. The Food and Drug Administration investigators also found that records had been falsified and researchers had failed to keep accurate case histories.

In one of the shuttered studies, people with a diagnosis of post-traumatic stress caused by childhood abuse took a relatively untested drug intended to mimic the effects of marijuana, to see if it relieved symptoms.

The article is here.

Saturday, June 11, 2016

Scientists Are Just as Confused About the Ethics of Big-Data Research as You

Sarah Zhang
Wired Magazine
Originally published May 20, 2016

Here is an excerpt:

Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage. (The OkCupid research does not seem to have gone through any kind of ethical review process, and a Cornell ethics review board approved the Facebook experiment.) And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

The article is here.

Thursday, June 2, 2016

Scientific consent, data, and doubling down on the internet

Oliver Keyes
Originally published May 12, 2016

Here is an excerpt:

The Data

Yesterday morning I woke up to a Twitter friend pointing me to a release of OKCupid data, by Kirkegaard. Having now spent some time exploring the data, and reading both public statements on the work and the associated paper: this is without a doubt one of the most grossly unprofessional, unethical and reprehensible data releases I have ever seen.

There are two reasons for that. The first is very simple: Kirkegaard never asked anyone. He didn't ask OKCupid, and he didn't ask the users covered by the dataset — he simply said 'this is public so people should expect it's going to be released'.

The blog post is here.

Sunday, May 29, 2016

The job of ‘ethics committees’ should be ethically informed code consistency review

Søren Holm
J Med Ethics doi:10.1136/medethics-2015-103343

Moore and Donnelly argue in the paper ‘The job of “ethics committees”’ that research ethics committees should be renamed and that their job should be specified as “review of proposals for consistency with the duly established and applicable code” only. They raise a large number of issues, but in this comment I briefly want to suggest that two of their arguments are fundamentally flawed.

The first flawed argument is the argument related to the separation of powers. Moore and Donnelly proceed from the premise that it is pro tanto better to have an institutional arrangement that separates code-making powers and decisional powers, and then proceed to argue that this separation is not feasible for what they call ‘ethics consistency review’ because “no matter who established any prespecified review standards, the review decision maker must be empowered at review to revise those standards when this would make for an ethical improvement.”

The response article is here.

Thursday, February 4, 2016

French drug trial leaves one brain dead and five critically ill

By Angelique Chrisafis
The Guardian
Originally published January 15, 2016

Here is an excerpt:

Touraine said the study was a phase one clinical trial, in which healthy volunteers take the medication to “evaluate the safety of its use, tolerance and pharmacological profile of the molecule”.

Medical trials typically have three phases to assess a new drug or device for safety and effectiveness. Phase one entails a small group of volunteers and focuses only on safety. Phase two and three are progressively larger trials to assess the drug’s effectiveness, although safety remains paramount.

Testing had already been carried out on animals, including chimpanzees, starting in July, Touraine said.

Bial said it was committed to ensuring the wellbeing of test participants and was working with authorities to discover the cause of the injuries, adding that the clinical trial had been approved by French regulators.

The story is here.