Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Consent.

Thursday, March 7, 2024

Canada Postpones Plan to Allow Euthanasia for Mentally Ill

Craig McCulloh
Voice of America News
Originally posted 8 Feb 24

The Canadian government is delaying access to medically assisted death for people with mental illness.

Those suffering from mental illness were supposed to be able to access Medical Assistance in Dying — also known as MAID — starting March 17. The recent announcement by the government of Canadian Prime Minister Justin Trudeau was the second delay after original legislation authorizing the practice passed in 2021.

The delay came in response to a recommendation by a majority of the members of a committee made up of senators and members of Parliament.

One of the most high-profile proponents of MAID is British Columbia-based lawyer Chris Considine. In the mid-1990s, he represented Sue Rodriguez, who was dying from amyotrophic lateral sclerosis, commonly known as ALS.

Their bid for approval of a medically assisted death was rejected at the time by the Supreme Court of Canada. But a law passed in 2016 legalized euthanasia for individuals with terminal conditions. From then until 2022, more than 45,000 people chose to die.


Summary:

Canada originally planned to expand its Medical Assistance in Dying (MAiD) program to include individuals with mental illnesses in March 2024.
  • This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
  • The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
  • This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
I would be concerned about the following issues:
  • Vulnerability: Mental illness can impair judgement, raising concerns about informed consent and potential coercion.
  • Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
  • Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
  • Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.

Monday, February 5, 2024

Should Patients Be Allowed to Die From Anorexia? Is a 'Palliative' Approach to Mental Illness Ethical?

Katie Engelhart
New York Times Magazine
Originally posted 3 Jan 24

Here is an excerpt:

He came to think that he had been impelled by a kind of professional hubris — a hubris particular to psychiatrists, who never seemed to acknowledge that some patients just could not get better. That psychiatry had actual therapeutic limits. Yager wanted to find a different path. In academic journals, he came across a small body of literature, mostly theoretical, on the idea of palliative psychiatry. The approach offered a way for him to be with patients without trying to make them better: to not abandon the people who couldn’t seem to be fixed. “I developed this phrase of ‘compassionate witnessing,’” he told me. “That’s what priests did. That’s what physicians did 150 years ago when they didn’t have any tools. They would just sit at the bedside and be with somebody.”

Yager believed that a certain kind of patient — maybe 1 or 2 percent of them — would benefit from entirely letting go of standard recovery-oriented care. Yager would want to know that such a patient had insight into her condition and her options. He would want to know that she had been in treatment in the past, not just once but several times. Still, he would not require her to have tried anything and everything before he brought her into palliative care. Even a very mentally ill person, he thought, was allowed to have ideas about what she could and could not tolerate.

If the patient had a comorbidity, like depression, Yager would want to know that it was being treated. Maybe, for some patients, treating their depression would be enough to let them keep fighting. But he wouldn’t insist that a person be depression-free before she left standard treatment. Not all depression can be cured, and many people are depressed and make decisions for themselves every day. It would be Yager’s job to tease out whether what the patient said she wanted was what she authentically desired, or was instead an expression of pathological despair. Or more: a suicidal yearning. Or something different: a cry for help. That was always part of the job: to root around for authenticity in the morass of a disease.


Some thoughts:

The question of whether patients with anorexia nervosa should be allowed to die from their illness or receive palliative care is a complex and emotionally charged one, lacking easy answers. It delves into the profound depths of autonomy, mental health, and the very meaning of life itself.

The Anorexic's Dilemma:

Anorexia nervosa is a severe eating disorder characterized by a relentless pursuit of thinness and an intense fear of weight gain. It often manifests in severe food restriction, excessive exercise, and distorted body image. This relentless control, however, comes at a devastating cost. Organ failure, malnutrition, and even death can be the tragic consequences of the disease's progression.

Palliative Care: Comfort Not Cure:

Palliative care focuses on symptom management and improving quality of life for individuals with life-threatening illnesses. In the context of anorexia, it would involve addressing physical discomfort, emotional distress, and spiritual concerns, but without actively aiming for weight gain or cure. This raises numerous ethical and practical questions:
  • Respecting Autonomy: Does respecting a patient's autonomy mean allowing them to choose a path that may lead to death, even if their decision is influenced by a mental illness?
  • The Line Between Choice and Coercion: How do we differentiate between a genuine desire for death and succumbing to the distorted thinking patterns of anorexia?
  • Futility vs. Hope: When is treatment considered futile, and when should hope for recovery, however slim, be prioritized?

Finding the Middle Ground:

There's no one-size-fits-all answer to this intricate dilemma. Each case demands individual consideration, taking into account the patient's mental capacity, level of understanding, and potential for recovery. Open communication, involving the patient, their family, and a multidisciplinary team of healthcare professionals, is crucial in navigating this sensitive terrain.

Potential Approaches:
  • Enhanced Supportive Care: Focusing on improving the patient's quality of life through pain management, emotional support, and addressing underlying psychological issues.
  • Conditional Palliative Care: Providing palliative care while continuing to offer and encourage life-sustaining treatment, with the possibility of transitioning back to active recovery if the patient shows signs of willingness.
  • Advance Directives: Encouraging patients to discuss their wishes and preferences beforehand, allowing for informed decision-making when faced with difficult choices.

Wednesday, January 10, 2024

Indigenous data sovereignty—A new take on an old theme

Tahu Kukutai (2023).
Science, 382.
DOI:10.1126/science.adl4664

A new kind of data revolution is unfolding around the world, one that is unlikely to be on the radar of tech giants and the power brokers of Silicon Valley. Indigenous Data Sovereignty (IDSov) is a rallying cry for Indigenous communities seeking to regain control over their information while pushing back against data colonialism and its myriad harms. Led by Indigenous academics, innovators, and knowledge-holders, IDSov networks now exist in the United States, Canada, Aotearoa (New Zealand), Australia, the Pacific, and Scandinavia, along with an international umbrella group, the Global Indigenous Data Alliance (GIDA). Together, these networks advocate for the rights of Indigenous Peoples over data that derive from them and that pertain to Nation membership, knowledge systems, customs, or territories. This lens on data sovereignty not only exceeds narrow notions of sovereignty as data localization and jurisdictional rights but also upends the assumption that the nation state is the legitimate locus of power. IDSov has thus become an important catalyst for broader conversations about what Indigenous sovereignty means in a digital world and how some measure of self-determination can be achieved under the weight of Big Tech dominance.

Indigenous Peoples are, of course, no strangers to struggles for sovereignty. There are an estimated 476 million Indigenous Peoples worldwide; the actual number is unknown because many governments do not separately identify Indigenous Peoples in their national data collections such as the population census. Colonial legacies of racism; land dispossession; and the suppression of Indigenous cultures, languages, and knowledges have had profound impacts. For example, although Indigenous Peoples make up just 6% of the global population, they account for about 20% of the world’s extreme poor. Despite this, Indigenous Peoples continue to assert their sovereignty and to uphold their responsibilities as protectors and stewards of their lands, waters, and knowledges.

The rest of the article is here.

Here is a brief summary:

The article argues that Indigenous communities should control the data that derive from and describe them, because such data have repeatedly been used to exploit and harm Indigenous Peoples. Indigenous data sovereignty offers protection against those harms, guided by principles such as collective consent and respect for cultural protocols. The movement is still young, but it has the potential to become a powerful tool for Indigenous communities.

Saturday, October 21, 2023

Should Trackable Pill Technologies Be Used to Facilitate Adherence Among Patients Without Insight?

Tahir Rahman
AMA J Ethics. 2019;21(4):E332-336.
doi: 10.1001/amajethics.2019.332.

Abstract

Aripiprazole tablets with sensor offer a new wireless trackable form of aripiprazole that represents a clear departure from existing drug delivery systems, routes, or formulations. This tracking technology raises concerns about the ethical treatment of patients with psychosis when it could introduce unintended treatment challenges. The use of “trackable” pills and other “smart” drugs or nanodrugs assumes renewed importance given that physicians are responsible for determining patients’ decision-making capacity. Psychiatrists are uniquely positioned in society to advocate on behalf of vulnerable patients with mental health disorders. The case presented here focuses on guidance for capacity determination and informed consent for such nanodrugs.

(cut)

Ethics and Nanodrug Prescribing

Clinicians often struggle with improving treatment adherence in patients with psychosis who lack insight and decision-making capacity, so trackable nanodrugs, even though not proven to improve compliance, are worth considering. At the same time, guidelines are lacking to help clinicians determine which patients are appropriate for trackable nanodrug prescribing. The introduction of an actual tracking device in a patient who suffers from delusions of an imagined tracking device, like Mr A, raises specific ethical concerns. Clinicians have widely accepted the premise that confronting delusions is countertherapeutic. The introduction of trackable pill technology could similarly introduce unintended harms. Paul Appelbaum has argued that “with paranoid patients often worried about being monitored or tracked, giving them a pill that does exactly that is an odd approach to treatment.” The fear of invasion of privacy might discourage some patients from being compliant with their medical care and thus foster distrust of all psychiatric services. A good therapeutic relationship (often with family, friends, or a guardian involved) is critical to the patient’s engaging in ongoing psychiatric services.

The use of trackable pill technology to improve compliance deserves further scrutiny, as continued reliance on informal physician determinations of decision-making capacity remains a standard practice. Most patients are not yet accustomed to the idea of ingesting a trackable pill. Therefore, explanation of the intervention must be incorporated into the informed consent process, assuming the patient has decision-making capacity. Since patients may have concerns about the collected data being stored on a device, clinicians might have to answer questions regarding potential breaches of confidentiality. They will also have to contend with the clinical implications of acquiring patient treatment compliance data and justifying decisions based on such information. Below is a practical guide to aid clinicians in appropriate use of this technology.

Friday, September 22, 2023

Police are Getting DNA Data from People who Think They Opted Out

Jordan Smith
The Intercept
Originally posted 18 Aug 23

Here is an excerpt:

The communications are a disturbing example of how genetic genealogists and their law enforcement partners, in their zeal to close criminal cases, skirt privacy rules put in place by DNA database companies to protect their customers. How common these practices are remains unknown, in part because police and prosecutors have fought to keep details of genetic investigations from being turned over to criminal defendants. As commercial DNA databases grow, and the use of forensic genetic genealogy as a crime-fighting tool expands, experts say the genetic privacy of millions of Americans is in jeopardy.

Moore did not respond to The Intercept’s requests for comment.

To Tiffany Roy, a DNA expert and lawyer, the fact that genetic genealogists have accessed private profiles — while simultaneously preaching about ethics — is troubling. “If we can’t trust these practitioners, we certainly cannot trust law enforcement,” she said. “These investigations have serious consequences; they involve people who have never been suspected of a crime.” At the very least, law enforcement actors should have a warrant to conduct a genetic genealogy search, she said. “Anything less is a serious violation of privacy.”

(cut)

Exploitation of the GEDmatch loophole isn’t the only example of genetic genealogists and their law enforcement partners playing fast and loose with the rules.

Law enforcement officers have used genetic genealogy to solve crimes that aren’t eligible for genetic investigation per company terms of service and Justice Department guidelines, which say the practice should be reserved for violent crimes like rape and murder only when all other “reasonable” avenues of investigation have failed. In May, CNN reported on a U.S. marshal who used genetic genealogy to solve a decades-old prison break in Nebraska. There is no prison break exception to the eligibility rules, Larkin noted in a post on her website. “This case should never have used forensic genetic genealogy in the first place.”

A month later, Larkin wrote about another violation, this time in a California case. The FBI and the Riverside County Regional Cold Case Homicide Team had identified the victim of a 1996 homicide using the MyHeritage database — an explicit violation of the company’s terms of service, which make clear that using the database for law enforcement purposes is “strictly prohibited” absent a court order.

“The case presents an example of ‘noble cause bias,’” Larkin wrote, “in which the investigators seem to feel that their objective is so worthy that they can break the rules in place to protect others.”


My take:

Forensic genetic genealogists have been skirting GEDmatch privacy rules by running searches that include users who explicitly opted out of sharing their DNA with law enforcement. As a result, police can access DNA from people who believed they had protected their privacy by opting out of law enforcement searches.

The practice of forensic genetic genealogy has been used to solve a number of cold cases, but it has also raised concerns about privacy and civil liberties. Some people worry that the police could use DNA data to target innocent people or to build a genetic database of the entire population.

GEDmatch has since changed its privacy policy to make it more difficult for police to access DNA data from users who have opted out. However, the damage may already be done. Police have already used GEDmatch data to solve dozens of cases, and it is unclear how many people have had their DNA data accessed without their knowledge or consent.
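
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The profiles, field names, and scoring are invented and are not GEDmatch's actual data model or matching algorithm; the point is only that genealogy searches rank database profiles by how much DNA they share with a crime-scene sample, so any search that ignores or bypasses an opt-out field quietly pulls opted-out users back into the candidate list.

    # Purely hypothetical sketch: ranking profiles by matching genotypes,
    # with and without honoring an opt-out flag. All names, fields, and
    # genotypes are invented for illustration.
    crime_scene = {"rs1": "AG", "rs2": "CT", "rs3": "GG", "rs4": "AA"}

    database = [
        {"user": "user_1", "opted_out": True,
         "snps": {"rs1": "AG", "rs2": "CT", "rs3": "GG", "rs4": "AT"}},
        {"user": "user_2", "opted_out": False,
         "snps": {"rs1": "AA", "rs2": "CC", "rs3": "GG", "rs4": "AA"}},
    ]

    def shared_markers(a, b):
        # Count loci with identical genotypes, a crude stand-in for the
        # shared-segment (centimorgan) measures real matching tools use.
        return sum(1 for locus, geno in a.items() if b.get(locus) == geno)

    def search(db, honor_opt_out=True):
        # Drop opted-out profiles only if the search honors the flag.
        candidates = [p for p in db if not (honor_opt_out and p["opted_out"])]
        return sorted(candidates,
                      key=lambda p: shared_markers(crime_scene, p["snps"]),
                      reverse=True)

    print([p["user"] for p in search(database)])                       # opt-out respected
    print([p["user"] for p in search(database, honor_opt_out=False)])  # opt-out ignored

The toy scoring is beside the point; what matters is that a single boolean flag is all that stands between a customer's profile and a police search.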

Friday, June 23, 2023

In the US, patient data privacy is an illusion

Harlan M Krumholz
Opinion
BMJ 2023;381:p1225

Here is an excerpt:

The regulation allows anyone involved in a patient’s care to access health information about them. It is based on the paternalistic assumption that for any healthcare provider or related associate to be able to provide care for a patient, unfettered access to all of that individual’s health records is required, regardless of the patient’s preference. This provision removes control from the patient’s hands for choices that should be theirs alone to make. For example, the pop-up covid testing service you may have used can claim to be an entity involved in your care and gain access to your data. This access can be bought through many for-profit companies. The urgent care centre you visited for your bruised ankle can access all your data. The team conducting your prenatal testing is considered involved in your care and can access your records. Health insurance companies can obtain all the records. And these are just a few examples.

Moreover, health systems legally transmit sensitive information with partners, affiliates, and vendors through Business Associate Agreements. But patients may not want their sensitive information disseminated—they may not want all their identified data transmitted to a third party through contracts that enable those companies to sell their personal information if the data are de-identified. And importantly, with all the advances in data science, effectively de-identifying detailed health information is almost impossible.

HIPAA confers ample latitude to these third parties. As a result, companies make massive profits from the sale of data. Some companies claim to be able to provide comprehensive health information on more than 300 million Americans—most of the American public—for a price. These companies' business models are legal, yet most patients remain in the dark about what may be happening to their data.

However, massive accumulations of medical data do have the potential to produce insights into medical problems and accelerate progress towards better outcomes. And many uses of a patient’s data, despite moving throughout the healthcare ecosystem without their knowledge, may nevertheless help advance new diagnostics and therapeutics. The critical questions surround the assumptions people should have about their health data and the disclosures that should be made before a patient speaks with a health professional. Should each person be notified before interacting with a healthcare provider about what may happen with the information they share or the data their tests reveal? Are there new technologies that could help patients regain control over their data?

Although no one would relish a return to paper records, that cumbersome system at least made it difficult for patients’ data to be made into a commodity. The digital transformation of healthcare data has enabled wondrous breakthroughs—but at the cost of our privacy. And as computational power and more clever means of moving and organising data emerge, the likelihood of permission-based privacy will recede even further.
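
Krumholz's claim that detailed health data can rarely stay de-identified is easy to illustrate. Below is a minimal sketch, with entirely invented records and names, of a classic linkage attack: joining a "de-identified" clinical extract to a public, identified list on a few quasi-identifiers (ZIP code, birth date, sex) is often enough to put names back on the records.

    # Hypothetical linkage-attack sketch: a handful of quasi-identifiers
    # re-identify "anonymous" rows. All records below are invented.
    import pandas as pd

    # "De-identified" clinical extract: names removed, quasi-identifiers kept.
    clinical = pd.DataFrame([
        {"zip": "02138", "birth_date": "1961-07-31", "sex": "F", "diagnosis": "hypertension"},
        {"zip": "02139", "birth_date": "1975-03-12", "sex": "M", "diagnosis": "depression"},
    ])

    # Public, identified list (for example, a voter roll) with the same fields.
    public = pd.DataFrame([
        {"name": "Jane Doe", "zip": "02138", "birth_date": "1961-07-31", "sex": "F"},
        {"name": "John Roe", "zip": "02139", "birth_date": "1975-03-12", "sex": "M"},
    ])

    # A plain join on ZIP + birth date + sex restores identities.
    reidentified = clinical.merge(public, on=["zip", "birth_date", "sex"])
    print(reidentified[["name", "diagnosis"]])

The richer the "de-identified" record, the more join keys it offers, which is why the article treats effective de-identification of detailed health information as almost impossible.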

Saturday, January 7, 2023

Artificial intelligence and consent: a feminist anti-colonial critique

Varon, J., & Peña, P. (2021). 
Internet Policy Review, 10(4).
https://doi.org/10.14763/2021.4.1602

Abstract

Feminist theories have extensively debated consent in sexual and political contexts. But what does it mean to consent when we are talking about our data bodies feeding artificial intelligence (AI) systems? This article builds a feminist and anti-colonial critique about how an individualistic notion of consent is being used to legitimate practices of the so-called emerging Digital Welfare States, focused on digitalisation of anti-poverty programmes. The goal is to expose how the functional role of digital consent has been enabling data extractivist practices for control and exclusion, another manifestation of colonialism embedded in cutting-edge digital technology.

Here is an excerpt:

Another important criticism of this traditional idea of consent in sexual relationships is the forced binarism of yes/no. According to Gira Grant (2016), consent is not only given but also is built from multiple factors such as the location, the moment, the emotional state, trust, and desire. In fact, for this author, the example of sex workers could demonstrate how desire and consent are different, although sometimes confused as the same. For her there are many things that sex workers do without necessarily wanting to. However, they give consent for legitimate reasons.

It is also important how we express consent. For feminists such as Fraisse (2012), there is no consent without the body. In other words, consent has a relational and communication-based (verbal and nonverbal) dimension where power relationships matter (Tinat, 2012; Fraisse, 2012). This is very relevant when we discuss “tacit consent” in sexual relationships. In another dimension of how we express consent, Fraisse (2012) distinguishes between choice (the consent that is accepted and adhered to) and coercion (the "consent" that is allowed and endured).

According to Fraisse (2012), the critical view of consent that is currently claimed by feminist theories is not consent as a symptom of contemporary individualism; it has a collective approach through the idea of “the ethics of consent”, which provides attention to the "conditions" of the practice; the practice adapted to a contextual situation, therefore rejecting universal norms that ignore the diversified conditions of domination (Fraisse, 2012).

In the same sense, Lucia Melgar (2012) asserts that, in the case of sexual consent, it is not just an individual right, but a collective right of women to say "my body is mine" and from there it claims freedom to all bodies. As Sarah Ahmed (2017, n.p.) states “for feminism: no is a political labor”. In other words, “if your position is precarious you might not be able to afford no. [...] This is why the less precarious might have a political obligation to say no on behalf of or alongside those who are more precarious”. Referring to Éric Fassin, Fraisse (2012) understands that in this feminist view, consent will not be “liberal” anymore (as a refrain of the free individual), but “radical”, because, as Fassin would call, seeing in a collective act, it could function as some sort of consensual exchange of power.

Tuesday, September 27, 2022

Beyond individualism: Is there a place for relational autonomy in clinical practice and research?

Dove, E. S., Kelly, S. E., et al. (2017).
Clinical Ethics, 12(3), 150–165.
https://doi.org/10.1177/1477750917704156

Abstract

The dominant, individualistic understanding of autonomy that features in clinical practice and research is underpinned by the idea that people are, in their ideal form, independent, self-interested and rational gain-maximising decision-makers. In recent decades, this paradigm has been challenged from various disciplinary and intellectual directions. Proponents of ‘relational autonomy’ in particular have argued that people’s identities, needs, interests – and indeed autonomy – are always also shaped by their relations to others. Yet, despite the pronounced and nuanced critique directed at an individualistic understanding of autonomy, this critique has had very little effect on ethical and legal instruments in clinical practice and research so far. In this article, we use four case studies to explore to what extent, if at all, relational autonomy can provide solutions to ethical and practical problems in clinical practice and research. We conclude that certain forms of relational autonomy can have a tangible and positive impact on clinical practice and research. These solutions leave the ultimate decision to the person most affected, but encourage and facilitate the consideration of this person’s care and responsibility for connected others.

From the Discussion section

Together, these cases show that in our quest to enhance the practical value of the concept of relational autonomy in healthcare and research, we must be careful not to remove the patient or participant from the centre of decision-making. At the same time, we should acknowledge that the patient’s decision to consent (or refuse) to treatment or research can be augmented by facilitating and encouraging that her relations to, and responsibility for, others are considered in decision-making processes. Our case studies do not suggest that we should expand consent requirements to others per se, such as family members or community elders – that is, to add the requirement of seeking consent from further individuals who may also be seen as having a stake in the decision. Such a position would undermine the idea that the person who is centrally affected by a decision should typically have the final say in what happens with and to her, or her body, or even her data. As long as this general principle respects all legal exceptions (see below), we believe that it is a critical underpinning of fundamental respect for persons that should not be done away with. Moreover, expanding consent or requiring consent to include others (however so defined) undermines the main objective of relational autonomy, which is to foreground the relational aspect of human identities and interests, and not merely to expand the range of individuals who need to give consent to a procedure. An approach that merely extends consent requirements to other people does not foreground relations but rather presumptions about who the relevant others of a person are.

Sunday, February 14, 2021

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Frank, L., Nyholm, S. 
Artif Intell Law 25, 305–323 (2017).
https://doi.org/10.1007/s10506-017-9212-y

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

Here is an excerpt:

Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers—at least if the sex robots are of a highly sophisticated sort.

The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).

Thursday, January 7, 2021

How Might Artificial Intelligence Applications Impact Risk Management?

John Banja
AMA J Ethics. 2020;22(11):E945-951. 

Abstract

Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management. In any event, it seems that integration of AI models into health care operations will almost certainly introduce, if not new forms of risk, then a dramatically heightened magnitude of risk that will have to be managed.

AI Risks in Health Care

Artificial intelligence (AI) applications in health care have attracted enormous attention as well as immense public and private sector investment in the last few years. The anticipation is that AI technologies will dramatically alter—perhaps overhaul—health care practices and delivery. At the very least, hospitals and clinics will likely begin importing numerous AI models, especially “deep learning” varieties that draw on aggregate data, over the next decade.

A great deal of the ethics literature on AI has recently focused on the accuracy and fairness of algorithms, worries over privacy and confidentiality, “black box” decisional unexplainability, concerns over “big data” on which deep learning AI models depend, AI literacy, and the like. Although some of these risks, such as security breaches of medical records, have been around for some time, their materialization in AI applications will likely present large-scale privacy and confidentiality risks. AI models have already posed enormous challenges to hospitals and facilities by way of cyberattacks on protected health information, and they will introduce new ethical obligations for providers who might wish to share patient data or sell it to others. Because AI models are themselves dependent on hardware, software, algorithmic development and accuracy, implementation, data sharing and storage, continuous upgrading, and the like, risk management will find itself confronted with a new panoply of liability risks. On the one hand, risk management can choose to address these new risks by developing mitigation strategies. On the other hand, because these AI risks present a novel landscape of risk that might be quite unfamiliar, risk management might choose to leave certain of those challenges to others. This essay will discuss this “approach-avoidance” possibility in connection with 3 categories of risk—system malfunctions, privacy breaches, and consent to data repurposing—and conclude with some speculations on how those decisions might play out.

Saturday, December 26, 2020

Baby God: how DNA testing uncovered a shocking web of fertility fraud

Adrian Horton
The Guardian
Originally published 2 Dec 20

Here are two excerpts:

The database unmasked, with detached clarity, a dark secret hidden in plain sight for decades: the physician once named Nevada’s doctor of the year, who died in 2006 at age 94, had impregnated numerous patients with his own sperm, unbeknownst to the women or their families. The decades-long fertility fraud scheme, unspooled in the HBO documentary Baby God, left a swath of families – 26 children as of this writing, spanning 40 years of the doctor’s treatments – shocked at long-obscured medical betrayal, unmoored from assumptions of family history and stumbling over the most essential questions of identity. Who are you, when half your DNA is not what you thought?

(cut)

That reality – a once unknowable crime now made plainly knowable – has now come to pass, and the film features interviews with several of Fortier’s previously unknown children, each grappling with and tracing their way into a new web of half-siblings, questions of lineage and inheritance, and reframing of family history. Babst, who started as a cop at 19, dove into her own investigation, sourcing records on Dr Fortier that eventually revealed allegations of sexual abuse and molestation against his own stepchildren.

Brad Gulko, a human genomics scientist in San Francisco who bears a striking resemblance to the young Fortier, initially approached the revelation from the clinical perspective of biological motivations for procreation. “I feel like Dr Fortier found a way to justify in his own mind doing what he wanted to do that didn’t violate his ethical norms too much, even if he pushed them really hard,” he says in the film. “I’m still struggling with that. I don’t know where I’ll end up.”

The film quickly morphed, according to Olson, from an investigation of the Fortier case and his potential motivations to the larger, unresolvable questions of identity, nature versus nurture. “At first it was like ‘let’s get all the facts, we’re going to figure it out, what are his motivations, it will be super clear,’” said Olson. 

Sunday, May 31, 2020

The Answer to a COVID-19 Vaccine May Lie in Our Genes, But ...

Ifeoma Ajunwa & Forrest Briscoe
Scientific American
Originally posted 13 May 2020

Here is an excerpt:

Although the rationale for expanded genetic testing is obviously meant for the greater good, such testing could also bring with it a host of privacy and economic harms. In the past, genetic testing has also been associated with employment discrimination. Even before the current crisis, companies like 23andMe and Ancestry assembled and started operating their own private long-term large-scale databases of U.S. citizens’ genetic and health data. 23andMe and Ancestry recently announced they would use their databases to identify genetic factors that predict COVID-19 susceptibility.

Other companies are growing similar databases, for a range of purposes. And the NIH’s All of Us program is constructing a genetic database, owned by the federal government, in which data from one million people will be used to study various diseases. These new developments indicate an urgent need for appropriate genetic data governance.

Leaders from the biomedical research community recently proposed a voluntary code of conduct for organizations constructing and sharing genetic databases. We believe that the public has a right to understand the risks of genetic databases and a right to have a say in how those databases will be governed. To ascertain public expectations about genetic data governance, we surveyed over two thousand (n=2,020) individuals who altogether are representative of the general U.S. population. After educating respondents about the key benefits and risks associated with DNA databases—using information from recent mainstream news reports—we asked how willing they would be to provide their DNA data for such a database.

The info is here.

Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.

The info is here.

Saturday, February 22, 2020

Hospitals Give Tech Giants Access to Detailed Medical Records

Melanie Evans
The Wall Street Journal
Originally published 20 Jan 20

Here is an excerpt:

Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.

The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.

Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.

“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.

Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.

(cut)

Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.

The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.

The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.

Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.

The info is here.

Thursday, January 23, 2020

Colleges want freshmen to use mental health apps. But are they risking students’ privacy?

Deanna Paul
The New York Times
Originally posted 2 Jan 20

Here are two excerpts:

TAO Connect is just one of dozens of mental health apps permeating college campuses in recent years. In addition to increasing the bandwidth of college counseling centers, the apps offer information and resources on mental health issues and wellness. But as student demand for mental health services grows, and more colleges turn to digital platforms, experts say universities must begin to consider their role as stewards of sensitive student information and the consequences of encouraging or mandating these technologies.

The rise in student wellness applications arrives as mental health problems among college students have dramatically increased. Three out of 5 U.S. college students experience overwhelming anxiety, and 2 in 5 students reported debilitating depression, according to a 2018 survey from the American College Health Association.

Even so, only about 15 percent of undergraduates seek help at a university counseling center. These apps have begun to fill students’ needs by providing ongoing access to traditional mental health services without barriers such as counselor availability or stigma.

(cut)

“If someone wants help, they don’t care how they get that help,” said Lynn E. Linde, chief knowledge and learning officer for the American Counseling Association. “They aren’t looking at whether this person is adequately credentialed and are they protecting my rights. They just want help immediately.”

Yet she worried that students may be giving up more information than they realize and about the level of coercion a school can exert by requiring students to accept terms of service they otherwise wouldn’t agree to.

“Millennials understand that with the use of their apps they’re giving up privacy rights. They don’t think to question it,” Linde said.

The info is here.

Tuesday, January 21, 2020

How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?

AMA J Ethics. 2018;20(9):E864-872.
doi: 10.1001/amajethics.2018.864.

Abstract

Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.

The info is here.

Thursday, January 2, 2020

The Tricky Ethics of Google's Project Nightingale Effort

Cason Schmit
nextgov.com
Originally posted 3 Dec 19

The nation’s second-largest health system, Ascension, has agreed to allow the software behemoth Google access to tens of millions of patient records. The partnership, called Project Nightingale, aims to improve how information is used for patient care. Specifically, Ascension and Google are trying to build tools, including artificial intelligence and machine learning, “to make health records more useful, more accessible and more searchable” for doctors.

Ascension did not announce the partnership: The Wall Street Journal first reported it.

Patients and doctors have raised privacy concerns about the plan. Lack of notice to doctors and consent from patients are the primary concerns.

As a public health lawyer, I study the legal and ethical basis for using data to promote public health. Information can be used to identify health threats, understand how diseases spread and decide how to spend resources. But it’s more complicated than that.

The law deals with what can be done with data; this piece focuses on ethics, which asks what should be done.

Beyond Hippocrates

Big-data projects like this one should always be ethically scrutinized. However, data ethics debates are often narrowly focused on consent issues.

In fact, ethical determinations require balancing different, and sometimes competing, ethical principles. Sometimes it might be ethical to collect and use highly sensitive information without getting an individual’s consent.

The info is here.

Tuesday, December 24, 2019

DNA genealogical databases are a gold mine for police, but with few rules and little transparency

Paige St. John
The LA Times
Originally posted 24 Nov 19

Here is an excerpt:

But law enforcement has plunged into this new world with little to no rules or oversight, intense secrecy and by forming unusual alliances with private companies that collect the DNA, often from people interested not in helping close cold cases but learning their ethnic origins and ancestry.

A Times investigation found:
  • There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
  • When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
  • California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.

There are growing concerns that the race to use genealogical databases will have serious consequences, from its inherent erosion of privacy to the implications of broadened police power.

In California, an innocent twin was thrown in jail. In Georgia, a mother was deceived into incriminating her son. In Texas, police met search guidelines by classifying a case as sexual assault but after an arrest only filed charges of burglary. And in the county that started the DNA race with the arrest of the Golden State killer suspect, prosecutors have persuaded a judge to treat unsuspecting genetic contributors as “confidential informants” and seal searches so consumers are not scared away from adding their own DNA to the forensic stockpile.

Tuesday, November 19, 2019

Medical board declines to act against fertility doctor who inseminated woman with his own sperm

Marie Saavedra and Mark Smith
wfaa.com
Originally posted Oct 28, 2019

The Texas Medical Board has declined to act against a fertility doctor who inseminated a woman with his own sperm rather than from a donor the mother selected.

Though Texas lawmakers have now made such an act illegal, the Texas Medical Board found the actions did not “fall below the acceptable standard of care,” and declined further review, according to a response to a complaint obtained by WFAA.

In a follow-up email, a spokesperson told WFAA the board was hamstrung because it can't review complaints for instances that happened seven years or more past the medical treatment. 

The complaint was filed on behalf of 32-year-old Eve Wiley, of Dallas, who only recently learned her biological father wasn't the sperm donor selected by her mother. Instead, Wiley discovered her biological father was her mother’s fertility doctor in Nacogdoches.

Now 65, Wiley's mother, Margo Williams, had sought help from Dr. Kim McMorries because her husband was infertile.

The info is here.

Sunday, September 22, 2019

The Ethics Of Hiding Your Data From the Machines

Molly Wood
wired.com
Originally posted August 22, 2019

Here is an excerpt:

There’s also a real and reasonable fear that companies or individuals will take ethical liberties in the name of pushing hard toward a good solution, like curing a disease or saving lives. This is not an abstract problem: The co-founder of Google’s artificial intelligence lab, DeepMind, was placed on leave earlier this week after some controversial decisions—one of which involved the illegal use of over 1.5 million hospital patient records in 2017.

So sticking with the medical kick I’m on here, I propose that companies work a little harder to imagine the worst-case scenario surrounding the data they’re collecting. Study the side effects like you would a drug for restless leg syndrome or acne or hepatitis, and offer us consumers a nice, long, terrifying list of potential outcomes so we actually know what we’re getting into.

And for we consumers, well, a blanket refusal to offer up our data to the AI gods isn’t necessarily the good choice either. I don’t want to be the person who refuses to contribute my genetic data via 23andMe to a massive research study that could, and I actually believe this is possible, lead to cures and treatments for diseases like Parkinson’s and Alzheimer’s and who knows what else.

I also think I deserve a realistic assessment of the potential for harm to find its way back to me, because I didn’t think through or wasn’t told all the potential implications of that choice—like how, let’s be honest, we all felt a little stung when we realized the 23andMe research would be through a partnership with drugmaker (and reliable drug price-hiker) GlaxoSmithKline. Drug companies, like targeted ads, are easy villains—even though this partnership actually could produce a Parkinson’s drug. But do we know what GSK’s privacy policy looks like? That deal was a level of sharing we didn’t necessarily expect.

The info is here.