Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Regulations.

Tuesday, December 18, 2018

Super-smart designer babies could be on offer soon. But is that ethical?

Philip Ball
The Guardian
Originally posted November 19, 2018

Here is an excerpt:


Before we start imagining a Gattaca-style future of genetic elites and underclasses, there’s some context needed. The company says it is only offering such testing to spot embryos with an IQ low enough to be classed as a disability, and won’t conduct analyses for high IQ. But the technology the company is using will permit that in principle, and co-founder Stephen Hsu, who has long advocated for the prediction of traits from genes, is quoted as saying: “If we don’t do it, some other company will.”

The development must be set, too, against what is already possible and permitted in IVF embryo screening. The procedure called pre-implantation genetic diagnosis (PGD) involves extracting cells from embryos at a very early stage and “reading” their genomes before choosing which to implant. It has been enabled by rapid advances in genome-sequencing technology, making the process fast and relatively cheap. In the UK, PGD is strictly regulated by the Human Fertilisation and Embryology Authority (HFEA), which permits its use to identify embryos with several hundred rare genetic diseases of which the parents are known to be carriers. PGD for other purposes is illegal.

The info is here.

Thursday, August 23, 2018

Designing a Roadmap to Ethical AI in Government

Joshua Entsminger, Mark Esposito, Terence Tse and Danny Goh
www.thersa.org
Originally posted July 23, 2018

Here is an excerpt:

When a decision is made using AI, we may not know whether the data behind it was faulty; regardless, there will come a time when someone appeals a decision made by, or influenced by, AI-driven insights. People have the right to be informed that a significant decision concerning their lives was made with the help of an AI. To enforce this policy, governments will need a better record of which companies and institutions use AI to make significant decisions.

When specifically assessing a decision-making process of concern, the first step should be to determine whether or not the data set represents what the organisation wanted the AI to understand and make decisions about.

However, data sets, particularly easily available ones, cover a limited range of situations, and inevitably most AI systems will be confronted with situations they have not encountered before. The ethical issue is the framework by which decisions occur, and good data cannot secure that kind of ethical behavior by itself.

The blog post is here.

Thursday, August 16, 2018

Series of ethical stumbles tests NIH’s reliance on private sector for research funding

Lev Facher
STAT News
Originally published August 1, 2018

Here is an excerpt:

Now, the NIH is seeking to bounce back from the hit to its reputation — and to demonstrate that the failures of recent years are isolated incidents and not emblematic of a broader cultural problem. At the same time, some congressional aides have hinted at more aggressive oversight of the foundation through which the NIH takes on many of its partnerships.

NIH officials told STAT this week the agency is completing a plan to ensure better ethical compliance and better delineate the actual process for private-sector collaboration. The officials said the plan will be presented to an advisory committee in December.

Already, as STAT reported in April, the NIH proactively nixed a long-touted plan to accept roughly $200 million from pharmaceutical manufacturers to pursue research on pain and addiction treatment, with an explicit acknowledgement that involving companies being sued for their role in the opioid crisis could taint the perception of the research.

NIH Director Francis Collins acknowledged the setbacks in an interview with STAT this week, but defended his staff’s efforts.

The info is here.

Saturday, August 11, 2018

Should we care that the sex robots are coming?

Kate Devlin
unherd.com
Originally published July 12, 2018

Here is an excerpt:

There’s no evidence to suggest that human-human relationships will be damaged. Indeed, it may be a chance for people to experience feelings of love that they are otherwise denied, for any number of reasons. Whether or not that love is considered valid by society is a different matter. And while objectification is definitely an issue, it may be an avoidable one. Security and privacy breaches are a worry in any smart technologies, which puts a whole new spin on safe sex.

As for child sex robots – an abhorrent image – people have already been convicted of importing child-like sex dolls. But we shouldn’t shy away from considering whether research might deem them useful in a clinical setting, such as testing rehabilitation success, as has been trialled with virtual reality.

While non-sexual care robots are already in use, it was only three months ago that the race to produce the first commercially available model was won by a lifeless sex doll with an animatronic head and an integrated AI chatbot called Harmony. She might look the part, but she doesn’t move from the neck down. We are still a long way from Westworld.

Naturally, a niche market will be delighted at the prospect of bespoke robot pleasure to come. But many others are worried about the impact these machines will have on our own, human relationships. These concerns aren’t dispelled by the fact that the current form of the sex robot is a reductive, cartoonish stereotype of a woman: all big hair and bigger breasts.

The info is here.

Thursday, June 28, 2018

Are Most Clinical Trials Unethical?

Michel Shamy
American Council on Science and Health
Originally published May 21, 2018

Here is an excerpt:

Therefore, to render RCTs scientifically and ethically justifiable, certain conditions must be met. But what are they?

Much of the recent literature on the topic of RCT ethics references the concept of “equipoise,” which refers to uncertainty or disagreement in the medical community. Though it is widely cited, “equipoise” has been defined inconsistently, is not universally accepted, and can be difficult to operationalize. Most scientists agree that we should not do another study when the answer is known ahead of time; to do so would be redundant, wasteful, and ultimately harmful to patients. With some estimates suggesting that as much as 85% of clinical research may be wasteful, there is a strong imperative to develop clear criteria for when RCTs are necessary. In the absence of such criteria, RCTs that are unnecessary may be allowed to proceed – and unnecessary RCTs are, by definition, unethical.

We have proposed a preliminary set of criteria to guide judgments about whether a proposed RCT is scientifically justified. Every RCT should (1) ask a clear question, (2) assert a specific hypothesis, and (3) ensure that the hypothesis has not already been answered by available knowledge, including non-randomized studies. We then examined a sample of high-quality published RCTs and found that only 44% met these criteria.

The information is here.

Monday, May 28, 2018

The ethics of experimenting with human brain tissue

Nita Farahany and others
Nature
Originally published April 25, 2018

If researchers could create brain tissue in the laboratory that might appear to have conscious experiences or subjective phenomenal states, would that tissue deserve any of the protections routinely given to human or animal research subjects?

This question might seem outlandish. Certainly, today’s experimental models are far from having such capabilities. But various models are now being developed to better understand the human brain, including miniaturized, simplified versions of brain tissue grown in a dish from stem cells — brain organoids. And advances keep being made.

These models could provide a much more accurate representation of normal and abnormal human brain function and development than animal models can (although animal models will remain useful for many goals). In fact, the promise of brain surrogates is such that abandoning them seems itself unethical, given the vast amount of human suffering caused by neurological and psychiatric disorders, and given that most therapies for these diseases developed in animal models fail to work in people. Yet the closer the proxy gets to a functioning human brain, the more ethically problematic it becomes.

The information is here.

Friday, May 11, 2018

Samantha’s suffering: why sex machines should have rights too

Victoria Brooks
The Conversation
Originally posted April 5, 2018

Here is the conclusion:

Machines are indeed what we make them. This means we have an opportunity to avoid assumptions and prejudices brought about by the way we project human feelings and desires. But does this ethically entail that robots should be able to consent to or refuse sex, as human beings would?

The innovative philosophers and scientists Frank and Nyholm have found many legal reasons for answering both yes and no (a robot’s lack of human consciousness and legal personhood, and the “harm” principle, for example). Again, we find ourselves seeking to apply a very human law. But feelings of suffering outside of relationships, or identities accepted as the “norm”, are often illegitimised by law.

So a “legal” framework which has its origins in heteronormative desire does not necessarily construct the foundation of consent and sexual rights for robots. Rather, as the renowned post-human thinker Rosi Braidotti argues, we need an ethic, as opposed to a law, which helps us find a practical and sensitive way of deciding, taking into account emergences from cross-species relations. The kindness and empathy we feel toward Samantha may be a good place to begin.

The article is here.

Wednesday, March 7, 2018

The Squishy Ethics of Sex With Robots

Adam Rogers
Wired.com
Originally published February 2, 2018

Here is an excerpt:

Most of the world is ready to accept algorithm-enabled, internet-connected, virtual-reality-optimized sex machines with open arms (arms! I said arms!). The technology is evolving fast, which means two inbound waves of problems. Privacy and security, sure, but even solving those won’t answer two very hard questions: Can a robot consent to having sex with you? Can you consent to sex with it?

One thing that is unquestionable: There is a market. Either through licensing the teledildonics patent or risking lawsuits, several companies have tried to build sex technology that takes advantage of Bluetooth and the internet. “Remote connectivity allows people on opposite ends of the world to control each other’s dildo or sleeve device,” says Maxine Lynn, a patent attorney who writes the blog Unzipped: Sex, Tech, and the Law. “Then there’s also bidirectional control, which is going to be huge in the future. That’s when one sex toy controls the other sex toy and vice versa.”

Vibease, for example, makes a wearable that pulsates in time to synchronized digital books or a partner controlling an app. We-Vibe makes vibrators that a partner can control or set to preset patterns. And so on.

The article is here.

Thursday, December 21, 2017

An AI That Can Build AI

Dom Galeon and Kristin Houser
Futurism.com
Originally published on December 1, 2017

Here is an excerpt:

Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.

Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.

The information is here.

Tuesday, December 19, 2017

Health Insurers Are Still Skimping On Mental Health Coverage

Jenny Gold
Kaiser Health News/NPR
Originally published November 30, 2017

It has been nearly a decade since Congress passed the Mental Health Parity and Addiction Equity Act, with its promise to make mental health and substance abuse treatment just as easy to get as care for any other condition. Yet today, amid an opioid epidemic and a spike in the suicide rate, patients are still struggling to get access to treatment.

That is the conclusion of a national study published Thursday by Milliman, a risk management and health care consulting company. The report was released by a coalition of mental health and addiction advocacy organizations.

Among the findings:
  • In 2015, behavioral care was four to six times more likely to be provided out-of-network than medical or surgical care.

  • Insurers paid primary care providers 20 percent more for the same types of care than they paid addiction and mental health care specialists, including psychiatrists.

  • State statistics vary widely. In New Jersey, 45 percent of office visits for behavioral health care were out-of-network. In Washington, D.C., it was 63 percent.

The researchers at Milliman examined two large national databases containing medical claim records from major insurers for PPOs — preferred provider organizations — covering nearly 42 million Americans in all 50 states and D.C. from 2013 to 2015.

The article is here.

Tuesday, October 3, 2017

VA About To Scrap Ethics Law That Helps Safeguard Veterans From Predatory For-Profit Colleges

Adam Linehan
Task and Purpose
Originally posted October 2, 2017

An ethics law that prohibits Department of Veterans Affairs employees from receiving money or owning a stake in for-profit colleges that rake in millions in G.I. Bill tuition has “illogical and unintended consequences,” according to VA, which is pushing to suspend the 50-year-old statute.

But veteran advocacy groups say suspending the law would make it easier for the for-profit education industry to exploit its biggest cash cow: veterans. 

In a proposal published in the Federal Register on Sept. 14, VA claims that the statute — which, according to The New York Times, was enacted following a string of scandals involving the for-profit education industry — is redundant due to other conflict-of-interest laws that apply to all federal employees and provide sufficient safeguards.

Critics of the proposal, however, say that the statute provides additional regulations that protect against abuse and provide more transparency. 

“The statute is one of many important bipartisan reforms Congress implemented to protect G.I. Bill benefits from waste, fraud, and abuse,” William Hubbard, Student Veterans of America’s vice president of government affairs, said in an email to Task & Purpose. “A thoughtful and robust public conversation should be had to ensure that the interests of student veterans are at the top of the priority list.”

The article is here.

Editor's Note: The swamp continues to grow under the current administration.

Wednesday, September 6, 2017

The Nuremberg Code 70 Years Later

Jonathan D. Moreno, Ulf Schmidt, and Steve Joffe
JAMA. Published online August 17, 2017.

Seventy years ago, on August 20, 1947, the International Medical Tribunal in Nuremberg, Germany, delivered its verdict in the trial of 23 doctors and bureaucrats accused of war crimes and crimes against humanity for their roles in cruel and often lethal concentration camp medical experiments. As part of its judgment, the court articulated a 10-point set of rules for the conduct of human experiments that has come to be known as the Nuremberg Code. Among other requirements, the code called for the “voluntary consent” of the human research subject, an assessment of risks and benefits, and assurances of competent investigators. These concepts have become an important reference point for the ethical conduct of medical research. Yet, there has in the past been considerable debate among scholars about the code’s authorship, scope, and legal standing in both civilian and military science. Nonetheless, the Nuremberg Code has undoubtedly been a milestone in the history of biomedical research ethics.1-3

Writings on medical ethics, laws, and regulations in a number of jurisdictions and countries, including a detailed and sophisticated set of guidelines from the Reich Ministry of the Interior in 1931, set the stage for the code. The same focus on voluntariness and risk that characterizes the code also suffuses these guidelines. What distinguishes the code is its context. As lead prosecutor Telford Taylor emphasized, although the Doctors’ Trial was at its heart a murder trial, it clearly implicated the ethical practices of medical experimenters and, by extension, the medical profession’s relationship to the state understood as an organized community living under a particular political structure. The embrace of Nazi ideology by German physicians, and the subsequent participation of some of their most distinguished leaders in the camp experiments, demonstrates the importance of professional independence from and resistance to the ideological and geopolitical ambitions of the authoritarian state.

The article is here.

Sunday, March 19, 2017

Revamping the US Federal Common Rule: Modernizing Human Participant Research Regulations

James G. Hodge Jr. and Lawrence O. Gostin
JAMA. Published online February 22, 2017

On January 19, 2017, the Office for Human Research Protections (OHRP), Department of Health and Human Services, and 15 federal agencies published a final rule to modernize the Federal Policy for the Protection of Human Subjects (known as the “Common Rule”).1 Initially introduced more than a quarter century ago, the Common Rule predated modern scientific methods and findings, notably human genome research.

Research enterprises now encompass vast multicenter trials in both academia and the private sector. The volume, types, and availability of public/private data and biospecimens have increased exponentially. Federal agencies demanded more accountability, research investigators sought more flexibility, and human participants desired more control over research. Most rule changes become effective in 2018, giving institutions time for implementation.

The article is here.

Friday, December 11, 2015

A Controversial Rewrite For Rules To Protect Humans In Experiments

By Rob Stein
NPR Morning Edition
Originally posted November 25, 2015

Throughout history, atrocities have been committed in the name of medical research.

Nazi doctors experimented on concentration camp prisoners. American doctors let poor black men with syphilis go untreated in the Tuskegee study. The list goes on.

To protect people participating in medical research, the federal government decades ago put in place strict rules on the conduct of human experiments.

Now the Department of Health and Human Services is proposing a major revision of these regulations, known collectively as the Common Rule. It's the first change proposed in nearly a quarter-century.

"We're in a very, very different world than when these regulations were first written," says Dr. Jerry Menikoff, who heads the HHS Office of Human Research Protections. "The goal is to modernize the rules to make sure terrible things don't happen."

The article and audio file are here.