Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, January 25, 2024

Listen, explain, involve, and evaluate: why respecting autonomy benefits suicidal patients

Samuel J. Knapp (2024)
Ethics & Behavior, 34:1, 18-27
DOI: 10.1080/10508422.2022.2152338

Abstract

Out of a concern for keeping suicidal patients alive, some psychotherapists may use hard persuasion or coercion to keep them in treatment. However, more recent evidence-supported interventions have made respect for patient autonomy a cornerstone, showing that the effective interventions that promote the wellbeing of suicidal patients also prioritize respect for patient autonomy. This article details how psychotherapists can incorporate respect for patient autonomy in the effective treatment of suicidal patients by listening to them, explaining treatments to them, involving them in decisions, and inviting evaluations from them on the process and progress of their treatment. It also describes how processes that respect patient autonomy can supplement interventions that directly address some of the drivers of suicide.

Public Impact Statement

Treatments for suicidal patients have improved in recent years, in part, because they emphasize promoting patient autonomy. This article explains why respecting patient autonomy is important in the treatment of suicidal patients and how psychotherapists can integrate respect for patient autonomy in their treatments.


Dr. Knapp's article discusses the importance of respecting patient autonomy in the treatment of suicidal patients within the framework of principle-based ethics. It highlights the ethical principles of beneficence, nonmaleficence, justice, respecting patient autonomy, and professional-patient relationships. The article emphasizes the challenges psychotherapists face in balancing the promotion of patient well-being with the need to respect autonomy, especially when dealing with suicidal patients.

Fear and stress in treating suicidal patients may lead psychotherapists to prioritize more restrictive interventions, potentially disregarding the importance of patient autonomy. The article argues that actions minimizing respect for patient autonomy may reflect a paternalistic attitude, in which interventions are implemented without patient consent for the sake of the patient's perceived well-being.

The problems associated with paternalistic interventions are discussed, emphasizing the importance of patients' internal motivation to change. The article advocates for autonomy-focused interventions, such as cognitive behavior therapy and dialectical behavior therapy, which have been shown to reduce suicide risk and improve outcomes. It suggests that involving patients in treatment decisions, listening to their experiences, and validating their feelings contribute to more effective interventions.

The article provides recommendations on how psychotherapists can respect patient autonomy, including listening carefully to patients, explaining treatment processes, involving patients in decisions, and inviting them to evaluate their progress. The ongoing nature of the informed consent process is stressed, along with the benefits of incorporating patient feedback into treatment. The article concludes by acknowledging the need for a balance between beneficence and respect for patient autonomy, particularly in cases of imminent danger, where temporary prioritization of beneficence may be necessary.

In summary, the article underscores the significance of respecting patient autonomy in the treatment of suicidal patients and provides practical guidance for psychotherapists to achieve this while promoting patient well-being.

Monday, January 22, 2024

Deciding for Patients Who Have Lost Decision-Making Capacity — Finding Common Ground in Medical Ethics

Bernard Lo
The New England Journal of Medicine
Originally published 16 Dec 23

Here is an excerpt:

Empirical studies...show that advance directives do not work as was hoped.2 Only a minority of patients complete them. Directives commonly are not well informed, because patients have serious misconceptions about life-sustaining interventions and about their own prognoses. Designated surrogates are often inaccurate in stating patients’ preferences in specific scenarios. Patient preferences commonly change over time. Patients often want surrogates to have leeway to override their prior statements. And when making decisions, surrogates frequently consider aspects of patient well-being to be more important than the patient’s previously stated preferences.

Conceptually, relying completely on an incompetent patient’s prior directives may be unsound. Often surrogates must extrapolate from the patient’s previous directives and statements to a situation that the patient did not foresee. Patients generally underestimate how well they can cope with and adapt to new situations.

So the standard approach shifted to advance care planning, a process for helping adults understand and communicate their values, goals, and preferences regarding future care. Advance care planning improves satisfaction with communication and reduces the risk of post-traumatic stress disorder, depression, or anxiety among surrogate decision makers.3 However, its use neither increases the likelihood that decisions are concordant with patients’ values and goals nor improves patients’ quality of life.3

Studies show that patients are less concerned about specific medical interventions than about clinical outcomes, burdens, and quality of life. Such evidence led advocates of advance care planning to begin focusing on preparing for in-the-moment decisions rather than documenting directives for medical interventions.

Many state legislatures rejected the strict requirements for surrogate decision making that Cruzan allowed. By 2004, 10 states allowed patients to appoint a health care proxy in a conversation with a physician as well as in formal documents. By 2016, 41 states — both conservative and liberal — had enacted laws allowing family members to act as health care surrogates for patients who lacked decision-making capacity and had not designated a health care proxy. Seven states included domestic partners or close friends on the list of acceptable surrogates.


Here is a quick summary:

Following the Supreme Court's 1990 Cruzan ruling, which allowed states to demand clear evidence of a patient's wishes before withdrawing life-sustaining treatment, practices shifted. Advance directives like living wills gained popularity, but studies revealed their limitations. Advance care planning, focusing on communication and values, took hold. POLST forms were introduced to document orders for specific interventions, but studies show those orders are often inconsistent with patients' actual situations and preferences.

The emphasis is now on family decision-making and flexible guidelines. Rigid legal formalities have decreased, and surrogates consider not just past directives but also current situations and evolving values. Discussions involving patients, surrogates, and physicians are crucial. Different approaches like past commitments, current well-being, and "life story continuation" may be appropriate depending on the context.

The Cruzan framework is no longer the basis for medical ethics and law. Family decisions, flexible standards, and evolving values now guide care. This shift showcases how medical ethics can adapt through discussions, research, and legal changes. Finding common ground on critical issues in today's divided society remains a challenge, but it's more important than ever.

Sunday, January 21, 2024

Doctors With Histories of Big Malpractice Settlements Now Work for Insurers

P. Rucker, D. Armstrong, & D. Burke
ProPublica.org
Originally published 15 Dec 23

Here is an excerpt:

Patients and the doctors who treat them don’t get to pick which medical director reviews their case. An anesthesiologist working for an insurer can overrule a patient’s oncologist. In other cases, the medical director might be a doctor like Kasemsap who has left clinical practice after multiple accusations of negligence.

As part of a yearlong series about how health plans refuse to pay for care, ProPublica and The Capitol Forum set out to examine who insurers picked for such important jobs.

Reporters could not find any comprehensive database of doctors working for insurance companies or any public listings by the insurers who employ them. Many health plans also farm out medical reviews to other companies that employ their own doctors. ProPublica and The Capitol Forum identified medical directors through regulatory filings, LinkedIn profiles, lawsuits and interviews with insurance industry insiders. Reporters then checked those names against malpractice databases, state licensing board actions and court filings in 17 states.

Among the findings: The Capitol Forum and ProPublica identified 12 insurance company doctors with either a history of multiple malpractice payments, a single payment in excess of $1 million or a disciplinary action by a state medical board.

One medical director settled malpractice cases with 11 patients, some of whom alleged he bungled their urology surgeries and left them incontinent. Another was reprimanded by a state medical board for behavior that it found to be deceptive and dishonest. A third settled a malpractice case for $1.8 million after failing to identify cancerous cells on a pathology slide, which delayed a diagnosis for a 27-year-old mother of two, who died less than a year after her cancer was finally discovered.

None of this would have been easily visible to patients seeking approvals for care or payment from insurers who relied on these medical directors.


The ethical implications in this article are staggering.  Here are some quick points:

Conflicted Care: In a concerning trend, some US insurers are employing doctors with past malpractice settlements to assess whether patients deserve coverage for recommended treatments. So, do these still-licensed reviewers actually understand best practices?

Financial Bias: Critics fear these doctors, having faced financial repercussions for past care decisions, might prioritize minimizing payouts over patient needs, potentially leading to denied claims and delayed care. In other words, do the reviewers have an inherent bias against patients, given that former patients have complained against them?

Transparency Concerns: The lack of clear disclosure about these doctors' backgrounds raises concerns about transparency and potential conflicts of interest within the healthcare system.

In essence, this is a terrible system for providing high-quality medical review.

Saturday, January 20, 2024

Private equity is buying up health care, but the real problem is why doctors are selling

Yashaswini Singh & Christopher Whaley
The Hill
Originally published 21 Dec 23

Here is an excerpt:

But amid warnings that private equity is taking over health care and portrayals of financiers as greedy villains, we’re ignoring the reality that no one is coercing individual physicians to sell. Many doctors are eager to hand off their practices, and not just for the payday. Running a private practice has become increasingly unsustainable, and alternative employment options, such as working for hospitals, are often unappealing. That leaves private equity as an attractive third path.

There are plenty of short-term steps that regulators should take to keep private equity firms in check. But the bigger problem we must address is why so many doctors feel the need to sell. The real solution to private equity in health care is to boost competition and address the pressures physicians are facing.

Consolidation in health care isn’t new. For decades, physician practices have been swallowed up by hospital systems. According to a study by the Physicians Advocacy Institute, nearly 75 percent of physicians now work for a hospital or corporate owner. While hospitals continue to drive consolidation, private equity is ramping up its spending and market share. One recent report found that private equity now owns more than 30 percent of practices in nearly one-third of metropolitan areas.

Years of study suggest that consolidation drives up health care costs without improving quality of care, and our research shows that private equity is no different. To deliver a high return to investors, private equity firms inflate charges and cut costs. One of our studies found that a few years after private equity invested in a practice, charges per patient were 50% higher than before. Practices also experience high turnover of physicians and increased hiring of non-physician staff.

How we got here has more to do with broader problems in health care than with private equity itself.


Here is my summary, which is really a warning:

The article dives into the trend of private equity firms acquiring healthcare practices. It argues that while the acquisitions themselves raise alarms, the bigger issue lies in understanding why doctors are willing to sell their practices in the first place.

The authors highlight the immense financial burden doctors shoulder while running their own practices. Between rising costs and stagnant insurance reimbursements, it is becoming increasingly difficult for them to stay afloat. This, the article argues, is what pushes them toward private equity firms, which offer immediate financial relief but often come with their own set of downsides for patients, like higher costs and reduced quality of care.

Therefore, instead of solely focusing on restricting private equity involvement, the article suggests we address the root cause: the financial woes of independent doctors. This could involve solutions like increased Medicare payments, tax breaks for independent practices, and alleviating the administrative burden doctors face. Only then can we ensure a sustainable healthcare system that prioritizes patient well-being.

Thursday, January 18, 2024

Biden administration rescinds much of Trump ‘conscience’ rule for health workers

Nathan Weixel
The Hill
Originally published 9 Jan 24

The Biden administration will largely undo a Trump-era rule that boosted the rights of medical workers to refuse to perform abortions or other services that conflicted with their religious or moral beliefs.

The final rule released Tuesday partially rescinds the Trump administration’s 2019 policy that would have stripped federal funding from health facilities that required workers to provide any service they objected to, such as abortions, contraception, gender-affirming care and sterilization.

The health care conscience protection statutes represent Congress’s attempt to strike a balance between maintaining access to health care and honoring religious beliefs and moral convictions, the Department of Health and Human Services said in the rule.

“Some doctors, nurses, and hospitals, for example, object for religious or moral reasons to providing or referring for abortions or assisted suicide, among other procedures. Respecting such objections honors liberty and human dignity,” the department said.

But at the same time, Health and Human Services said “patients also have rights and health needs, sometimes urgent ones. The Department will continue to respect the balance Congress struck, work to ensure individuals understand their conscience rights, and enforce the law.”


Summary from Healthcare Dive

The HHS Office of Civil Rights has again updated guidance on providers’ conscience rights. The latest iteration, announced on Tuesday, aims to strike a balance between honoring providers’ religious and moral beliefs and ensuring access to healthcare, according to the agency.

President George W. Bush created conscience rules in 2008, which codify the rights of healthcare workers to refuse to perform medical services that conflict with their religious or moral beliefs. Since then, subsequent administrations have rewritten the rules, with Democrats limiting the scope and Republicans expanding conscience protections. 

The most recent revision largely undoes a 2019 Trump-era policy — which never took effect — that sought to expand the rights of healthcare workers broadly to refuse to perform medical services, such as abortions, on religious or moral grounds.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted 30 October 23

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”


My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Monday, January 1, 2024

Cyborg computer with living brain organoid aces machine learning tests

Loz Blain
New Atlas
Originally posted 12 DEC 23

Here are two excerpts:

Now, Indiana University researchers have taken a slightly different approach by growing a brain "organoid" and mounting that on a silicon chip. The difference might seem academic, but by allowing the stem cells to self-organize into a three-dimensional structure, the researchers hypothesized that the resulting organoid might be significantly smarter, that the neurons might exhibit more "complexity, connectivity, neuroplasticity and neurogenesis" if they were allowed to arrange themselves more like the way they normally do.

So they grew themselves a little brain ball organoid, less than a nanometer in diameter, and they mounted it on a high-density multi-electrode array – a chip that's able to send electrical signals into the brain organoid, as well as reading electrical signals that come out due to neural activity.

They called it "Brainoware" – which they probably meant as something adjacent to hardware and software, but which sounds far too close to "BrainAware" for my sensitive tastes, and evokes the perpetual nightmare of one of these things becoming fully sentient and understanding its fate.

(cut)

And finally, much like the team at Cortical Labs, this team really has no clear idea what to do about the ethics of creating micro-brains out of human neurons and wiring them into living cyborg computers. “As the sophistication of these organoid systems increases, it is critical for the community to examine the myriad of neuroethical issues that surround biocomputing systems incorporating human neural tissue," wrote the team. "It may be decades before general biocomputing systems can be created, but this research is likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases."


Here is my summary:

There is a new type of computer chip that uses living brain cells. The brain cells are grown from human stem cells and organized into a ball-like structure called an organoid. The organoid is mounted on a chip that can send electrical signals to the brain cells and read the signals they produce. The researchers found that the organoid could learn tasks such as speech recognition and mathematical prediction, in some cases with less training than conventional artificial neural networks require. They believe this type of chip could have many applications, such as in artificial intelligence and medical research. However, there are also ethical concerns about using living brain cells in computers.

Sunday, December 31, 2023

Problems with the interjurisdictional regulation of psychological practice

Taube, D. O., Shapiro, D. L., et al. (2023).
Professional Psychology: Research and Practice,
54(6), 389–402.

Abstract

The U.S. Constitutional structure creates ethical conflicts for the cross-jurisdictional practice of professional psychology. The profession has chosen to seek interstate agreements to overcome such barriers, and such agreements now include almost 80% of American jurisdictions. Although an improvement over a patchwork of state laws regarding practice, the structure of this agreement and the exclusion of the remaining states continue to pose barriers to the principles of beneficence and nonmaleficence. It creates a system that is extraordinarily difficult to change and places an unrealistic burden on professionals to know, address, and act under complex legal mandates. As psychological services have moved increasingly to remote platforms and cross-jurisdictional business models, and as a nationwide mental health crisis has emerged alongside the pandemic, it is time to consider a national professional licensing system more seriously, both to further reduce barriers to care and complexity and to permit the best interests of patients to prevail.

Impact Statement

Access to and the ability to continue receiving mental health care across jurisdictions and nations has become increasingly urgent in the wake of the COVID-19 pandemic. This Ethics in Motion section highlights legal barriers to providing ethical care across jurisdictions, how those challenges developed, and strengths and limitations of current approaches and potential solutions.


My summary: 

The current system of interjurisdictional regulation of psychological practice in the United States is problematic because it creates ethical conflicts for psychologists and places an unrealistic burden on them to comply with complex legal mandates. The system is also extraordinarily difficult to change, and it excludes psychologists in states that have not joined the interstate agreement. As a result, the current system does not adequately protect the interests of patients.

A national professional licensing system would be a more effective way to regulate the practice of psychology across state lines. Such a system would eliminate the need for psychologists to comply with multiple state laws, and it would make it easier for them to provide care to patients who live in different states. A national system would also be more equitable, as it would ensure that all psychologists are held to the same standards.

Wednesday, December 27, 2023

This algorithm could predict your health, income, and chance of premature death

Holly Barker
Science.org
Originally published 18 DEC 23

Here is an excerpt:

The researchers trained the model, called “life2vec,” on every individual’s life story from 2008 to 2016, and the model sought patterns in these stories. Next, they used the algorithm to predict whether someone on the Danish national registers had died by 2020.

The model’s predictions were accurate 78% of the time. It identified several factors that favored a greater risk of premature death, including having a low income, having a mental health diagnosis, and being male. The model’s misses were typically caused by accidents or heart attacks, which are difficult to predict.

Although the results are intriguing—if a bit grim—some scientists caution that the patterns might not hold true for non-Danish populations. “It would be fascinating to see the model adapted using cohort data from other countries, potentially unveiling universal patterns, or highlighting unique cultural nuances,” says Youyou Wu, a psychologist at University College London.

Biases in the data could also confound its predictions, she adds. (The overdiagnosis of schizophrenia among Black people could cause algorithms to mistakenly label them at a higher risk of premature death, for example.) That could have ramifications for things such as insurance premiums or hiring decisions, Wu adds.
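
The excerpt describes life2vec only at a high level: lives become sequences of registry events, and a model learns to predict outcomes from them. As a toy illustration of that general idea (not the actual life2vec architecture, which is a transformer trained on Danish registry data), here is a minimal sketch that treats invented life stories as bags of event tokens and fits a simple classifier; every event, label, and modeling choice below is an assumption for illustration:

```python
# A toy sketch of sequence-of-life-events prediction. All events, labels,
# and the bag-of-words model are illustrative assumptions; the real life2vec
# is a transformer trained on national registry data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each string stands in for one person's 2008-2016 event history.
life_stories = [
    "job_loss low_income depression_dx hospital_visit",
    "promotion marriage stable_income",
    "injury low_income job_loss",
    "graduation new_job stable_income marriage",
]
died_by_2020 = [1, 0, 1, 0]  # invented outcome labels

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(life_stories, died_by_2020)

# Estimated risk for a new, hypothetical event history.
print(model.predict_proba(["low_income depression_dx"])[0][1])
```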


Here is my summary:

A new algorithm, trained on a mountain of Danish life stories, can peer into your future with unsettling precision. It can predict your health, income, and even your odds of an early demise. It does this by analyzing the sequence of life events, like getting a job or falling ill, and that capability raises both possibilities and ethical concerns.

On one hand, imagine the potential for good: nudges towards healthier habits or financial foresight, tailored to your personal narrative. On the other, anxieties around bias and discrimination loom. We must ensure this powerful tool is used wisely, for the benefit of all, lest it exacerbate existing inequalities or create new ones. The algorithm’s gaze into the future, while remarkable, is just that – a glimpse, not a script. 

Friday, December 15, 2023

Clinical documentation of patient identities in the electronic health record: Ethical principles to consider

Decker, S. E., et al. (2023). 
Psychological Services.
Advance online publication.

Abstract

The American Psychological Association’s multicultural guidelines encourage psychologists to use language sensitive to the lived experiences of the individuals they serve. In organized care settings, psychologists have important decisions to make about the language they use in the electronic health record (EHR), which may be accessible to both the patient and other health care providers. Language about patient identities (including but not limited to race, ethnicity, gender, and sexual orientation) is especially important, but little guidance exists for psychologists on how and when to document these identities in the EHR. Moreover, organizational mandates, patient preferences, fluid identities, and shifting language may suggest different documentation approaches, posing ethical dilemmas for psychologists to navigate. In this article, we review the purposes of documentation in organized care settings, review how each of the five American Psychological Association Code of Ethics’ General Principles relates to identity language in EHR documentation, and propose a set of questions for psychologists to ask themselves and their patients when making choices about documenting identity variables in the EHR.

Impact Statement

Psychologists in organized care settings may face ethical dilemmas about what language to use when documenting patient identities (race, ethnicity, gender, sexual orientation, and so on) in the electronic health record. This article provides a framework for considering how to navigate these decisions based on the American Psychological Association Code of Ethics’ five General Principles. To guide psychologists in decision making, questions to ask self and patient are included, as well as suggestions for further study.

Here is my summary:

The authors emphasize the lack of clear guidelines for psychologists on how and when to document these identity variables in EHRs. They acknowledge the complexities arising from organizational mandates, patient preferences, fluid identities, and evolving language, which can lead to ethical dilemmas for psychologists.

To address these challenges, the article proposes a framework based on the five General Principles of the American Psychological Association (APA) Code of Ethics:
  1. Beneficence and Nonmaleficence: Psychologists must prioritize patient welfare and avoid harm. Documentation choices should serve patients' best interests, which may require ongoing training and staying abreast of evolving language and cultural norms.
  2. Fidelity and Responsibility: Psychologists must uphold professional standards and be transparent about the purposes of documentation, seeking patient consent when appropriate.
  3. Integrity: Psychologists must maintain accuracy and honesty and avoid misrepresenting or misusing patient identity information.
  4. Justice: Psychologists should promote fairness and guard against discrimination, remaining mindful of how identity documentation can impact patients' access to care and their overall well-being.
  5. Respect for People's Rights and Dignity: Psychologists must respect the inherent dignity, privacy, and self-determination of all individuals, regardless of their identity, and avoid discriminatory or stigmatizing language in EHR documentation.
To aid psychologists in making informed decisions about identity documentation, the authors propose a set of questions to consider:
  1. What is the purpose of documenting this identity variable?
  2. Is this information necessary for providing appropriate care or fulfilling legal/regulatory requirements?
  3. How will this information be used?
  4. What are the potential risks and benefits of documenting this information?
  5. What are the patient's preferences regarding the documentation of their identity?
By carefully considering these questions, psychologists can make ethically sound decisions that protect patient privacy and promote their well-being.

Wednesday, December 13, 2023

Science and Ethics of “Curing” Misinformation

Freiling, I., Krause, N. M., & Scheufele, D. A.
AMA J Ethics. 2023;25(3):E228-237. 

Abstract

A growing chorus of academicians, public health officials, and other science communicators have warned of what they see as an ill-informed public making poor personal or electoral decisions. Misinformation is often seen as an urgent new problem, so some members of these communities have pushed for quick but untested solutions without carefully diagnosing ethical pitfalls of rushed interventions. This article argues that attempts to “cure” public opinion that are inconsistent with best available social science evidence not only leave the scientific community vulnerable to long-term reputational damage but also raise significant ethical questions. It also suggests strategies for communicating science and health information equitably, effectively, and ethically to audiences affected by it without undermining affected audiences’ agency over what to do with it.

My summary:

The authors explore the challenges and ethical considerations surrounding efforts to combat misinformation. They argue that the term "curing" is problematic in this context, as it suggests that misinformation is a disease that can be eradicated. In their view, this framing is overly simplistic and disregards the complex social and psychological factors that contribute to the spread of misinformation.

The authors identify several ethical concerns with current approaches to combating misinformation, including:
  • The potential for censorship and suppression of legitimate dissent.
  • The undermining of public trust in science and expertise.
  • The creation of echo chambers and further polarization of public opinion.
Instead of trying to "cure" misinformation, the authors propose a more nuanced and ethical approach that focuses on promoting critical thinking, media literacy, and civic engagement. They also emphasize the importance of addressing the underlying social and psychological factors that contribute to the spread of misinformation, such as social isolation, distrust of authority, and a desire for simple explanations.

Tuesday, December 12, 2023

Health Insurers Have Been Breaking State Laws for Years

Maya Miller and Robin Fields
ProPublica.org
Originally published 16 NOV 23

Here is an excerpt:

State insurance departments are responsible for enforcing these laws, but many are ill-equipped to do so, researchers, consumer advocates and even some regulators say. These agencies oversee all types of insurance, including plans covering cars, homes and people’s health. Yet they employed fewer people last year than they did a decade ago. Their first priority is making sure plans remain solvent; protecting consumers from unlawful denials often takes a backseat.

“They just honestly don’t have the resources to do the type of auditing that we would need,” said Sara McMenamin, an associate professor of public health at the University of California, San Diego, who has been studying the implementation of state mandates.

Agencies often don’t investigate health insurance denials unless policyholders or their families complain. But denials can arrive at the worst moments of people’s lives, when they have little energy to wrangle with bureaucracy. People with plans purchased on HealthCare.gov appealed less than 1% of the time, one study found.

ProPublica surveyed every state’s insurance agency and identified just 45 enforcement actions since 2018 involving denials that have violated coverage mandates. Regulators sometimes treat consumer complaints as one-offs, forcing an insurer to pay for that individual’s treatment without addressing whether a broader group has faced similar wrongful denials.

When regulators have decided to dig deeper, they’ve found that a single complaint is emblematic of a systemic issue impacting thousands of people.

In 2017, a woman complained to Maine’s insurance regulator, saying her carrier, Aetna, broke state law by incorrectly processing claims and overcharging her for services related to the birth of her child. After being contacted by the state, Aetna acknowledged the mistake and issued a refund.


Here's my take:

The article explores the ethical issues surrounding health insurance denials and the violation of state laws. The investigation reveals a pattern of health insurance companies systematically denying coverage for medically necessary treatments, even when such denials directly contravene state laws designed to protect patients. The unethical practices extend to various states, indicating a systemic problem within the industry. Patients are often left in precarious situations, facing financial burdens and health risks due to the denial of essential medical services, raising questions about the industry's commitment to prioritizing patient well-being over profit margins.

The article underscores the need for increased regulatory scrutiny and enforcement to hold health insurance companies accountable for violating state laws and compromising patient care. It highlights the ethical imperative for insurers to prioritize their fundamental responsibility to provide coverage for necessary medical treatments and adhere to the legal frameworks in place to safeguard patient rights. The investigation sheds light on the intersection of profit motives and ethical considerations within the health insurance industry, emphasizing the urgency of addressing these systemic issues to ensure that patients receive the care they require without undue financial or health-related consequences.

Saturday, December 9, 2023

Physicians’ Refusal to Wear Masks to Protect Vulnerable Patients—An Ethical Dilemma for the Medical Profession

Dorfman D, Raz M, Berger Z.
JAMA Health Forum. 2023;4(11):e233780.
doi:10.1001/jamahealthforum.2023.3780

Here is an excerpt:

In theory, the solution to the problem should be simple: patients who wear masks to protect themselves, as recommended by the CDC, can ask the staff and clinicians to wear a mask as well when seeing them, and the clinicians would oblige given the efficacy masks have shown in reducing the spread of respiratory illnesses. However, disabled patients report physicians and other clinical staff having refused to wear a mask when caring for them. Although it is hard to know how prevalent this phenomenon is, what recourse do patients have? How should health care systems approach clinicians and staff who refuse to mask when treating a disabled patient?

Physicians have a history of antagonism to the idea that they themselves might present a health risk to their patients. Famously, when Hungarian physician Ignaz Semmelweis originally proposed handwashing as a measure to reduce puerperal fever, he was met with ridicule and ostracized from the profession.

Physicians were also historically reluctant to adopt new practices to protect not only patients but also physicians themselves against infection in the midst of the AIDS epidemic. In 1985, the CDC presented its guidance on workplace transmission, instructing physicians to provide care, “regardless of whether HCWs [health care workers] or patients are known to be infected with HTLV-III/LAV [human T-lymphotropic virus type III/lymphadenopathy-associated virus] or HBV [hepatitis B virus].” These CDC guidelines offered universal precautions, common-sense, nonstigmatizing, standardized methods to reduce infection. Yet, some physicians bristled at the idea that they need to take simple, universal public health steps to prevent transmission, even in cases in which infectivity is unknown, and instead advocated for a medicalized approach: testing or masking only in cases when a patient is known to be infected. Such an individualized medicalized approach fails to meet the public health needs of the moment.

Patients are the ones who pay the price for physicians’ objections to changes in practices, whether it is handwashing or the denial of care as an unwarranted HIV precaution. Yet today, with the enactment of disability antidiscrimination law, patients are protected, at least on the books.

As we have written elsewhere, federal law supports the right of a disabled individual to request masking as a reasonable disability accommodation in the workplace and at schools.


Here is my summary:

This article explores the ethical dilemma that arises when physicians refuse to wear masks, potentially jeopardizing vulnerable patients. The authors examine the conflict between personal beliefs and professional responsibilities, questioning the ethical implications of such refusals within the medical profession. The analysis emphasizes prioritizing patient well-being and public health over individual preferences, calling for a balance between personal freedoms and ethical obligations in healthcare settings.

Tuesday, November 28, 2023

Ethics of psychotherapy rationing: A review of ethical and regulatory documents in Canadian professional psychology

Gower, H. K., & Gaine, G. S. (2023).
Canadian Psychology / Psychologie canadienne. 
Advance online publication.

Abstract

Ethical and regulatory documents in Canadian professional psychology were reviewed for principles and standards related to the rationing of psychotherapy. Despite Canada’s high per capita health care expenses, mental health in Canada receives relatively low funding. Further, surveys indicated that Canadians have unmet needs for psychotherapy. Effective and ethical rationing of psychological treatment is a necessity, yet the topic of rationing in psychology has received scant attention. The present study involved a qualitative review of codes of ethics, codes of conduct, and standards of practice documents for their inclusion of rationing principles and standards. Findings highlight the strengths and shortcomings of these documents related to guiding psychotherapy rationing. The discussion offers recommendations for revising these ethical and regulatory documents to promote more equitable and cost-effective use of limited psychotherapy resources in Canada.

Impact Statement

Canadian professional psychology regulatory documents contain limited reference to rationing imperatives, despite scarce psychotherapy resources. While the foundation of distributive justice is in place, rationing-specific principles, standards, and practices are required to foster the fair and equitable distribution of psychotherapy by Canadian psychologists.

From the recommendations:

Recommendations for Canadian Psychology Regulatory Documents
  1. Explicitly widen psychologists’ scope of concern to include not only current clients but also waiting clients and those who need treatment but face access barriers.
  2. Acknowledge the scarcity of health care resources (in public and private settings) and the high demand for psychology services (e.g., psychotherapy) and admonish inefficient and cost-ineffective use.
  3. Draw an explicit connection between the general principle of distributive justice and the specific practices related to rationing of psychology resources, including, especially, mitigation of biases likely to weaken ethical decision making.
  4. Encourage the use of outcome monitoring measures to aid relative utility calculations for triage and termination decisions and to ensure efficiency and distributive justice.
  5. Recommend advocacy by psychologists to address barriers to accessing needed services (e.g., psychotherapy), including promoting the cost effectiveness of psychotherapy as well as highlighting systemic barriers related to presenting problem, disability, ethnicity, race, gender, sexuality, or income.

Monday, November 27, 2023

Synthetic human embryos created in groundbreaking advance

Hannah Devlin
The Guardian
Originally posted 14 JUNE 23

Here is an excerpt:

“Our human model is the first three-lineage human embryo model that specifies amnion and germ cells, precursor cells of egg and sperm,” Żernicka-Goetz told the Guardian before the talk. “It’s beautiful and created entirely from embryonic stem cells.”

The development highlights how rapidly the science in this field has outpaced the law, and scientists in the UK and elsewhere are already moving to draw up voluntary guidelines to govern work on synthetic embryos. “If the whole intention is that these models are very much like normal embryos, then in a way they should be treated the same,” Lovell-Badge said. “Currently in legislation they’re not. People are worried about this.”

There is also a significant unanswered question on whether these structures, in theory, have the potential to grow into a living creature. The synthetic embryos grown from mouse cells were reported to appear almost identical to natural embryos. But when they were implanted into the wombs of female mice, they did not develop into live animals. In April, researchers in China created synthetic embryos from monkey cells and implanted them into the wombs of adult monkeys, a few of which showed the initial signs of pregnancy but none of which continued to develop beyond a few days. Scientists say it is not clear whether the barrier to more advanced development is merely technical or has a more fundamental biological cause.


Here is my summary:

Researchers used stem cells to create structures that resemble early-stage human embryos. According to the Guardian report, the structures do not have a beating heart or the beginnings of a brain, but they include cells that would typically go on to form the placenta, yolk sac and the embryo itself.

The synthetic embryos could be used to study human development and to develop new treatments for infertility and miscarriage. However, the research also raises ethical concerns, as it is not clear whether the synthetic embryos should be considered the same as natural embryos.

Some bioethicists have argued that the synthetic embryos should be treated with the same respect as natural embryos, as they have the potential to develop into human beings. Others have argued that the synthetic embryos are not the same as natural embryos, as they were not created through the union of an egg and sperm.

The research has been welcomed by some scientists, who believe it has the potential to revolutionize our understanding of human development. However, other scientists have expressed concern about the ethical implications of the research.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 OCT 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.
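
Levine's moral parliament is described only conceptually, but cooperative bargaining has a standard textbook form. Here is a minimal sketch, under invented numbers, of a Nash-bargaining aggregator: each moral position scores each candidate action, and the system picks the action that maximizes the product of gains over a disagreement point. Everything here (the positions, actions, utilities, and the choice of Nash bargaining specifically) is an illustrative assumption, not a description of the Allen Institute's system:

```python
# A toy "moral parliament": each moral position assigns a utility in [0, 1]
# to each action; the consensus action maximizes the Nash bargaining product.
from math import prod

# Hypothetical utilities for the deli-counter example discussed above.
utilities = {
    "wait_in_line_again": {"utilitarian": 0.3, "deontologist": 0.9, "virtue": 0.6},
    "fetch_new_spoon_directly": {"utilitarian": 0.9, "deontologist": 0.7, "virtue": 0.8},
}
disagreement_point = 0.1  # payoff to each position if no consensus is reached

def nash_score(action: str) -> float:
    """Product of each position's gain over the disagreement point."""
    return prod(max(u - disagreement_point, 0.0) for u in utilities[action].values())

consensus = max(utilities, key=nash_score)
print(consensus)  # -> "fetch_new_spoon_directly" under these toy numbers
```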


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.
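
To make the supervised variant concrete, here is a minimal sketch: dilemma descriptions paired with human moral judgments train a text classifier that then labels a new situation. The dilemmas, labels, and model choice are all invented for illustration; real systems in this area use far larger datasets and models:

```python
# A toy supervised-learning setup for moral judgments, assuming a small
# invented dataset of dilemma descriptions labeled by human annotators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

dilemmas = [
    "jump the queue at a busy deli counter",
    "go back for a new spoon after dropping yours",
    "take the last seat from an elderly passenger",
    "return a lost wallet to its owner",
]
labels = ["wrong", "ok", "wrong", "ok"]  # human moral judgments

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(dilemmas, labels)

print(model.predict(["skip the line at the pharmacy"]))  # label inferred from toy data
```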

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.

Friday, November 24, 2023

UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans

Casey Ross and Bob Herman
Statnews.com
Originally posted 14 Nov 23

A class action lawsuit was filed Tuesday against UnitedHealth Group and a subsidiary alleging that they are illegally using an algorithm to deny rehabilitation care to seriously ill patients, even though the companies know the algorithm has a high error rate.

The class action suit, filed on behalf of deceased patients who had a UnitedHealthcare Medicare Advantage plan and their families by the California-based Clarkson Law Firm, follows the publication of a STAT investigation Tuesday. The investigation, cited by the lawsuit, found UnitedHealth pressured medical employees to follow an algorithm, which predicts a patient’s length of stay, to issue payment denials to people with Medicare Advantage plans. Internal documents revealed that managers within the company set a goal for clinical employees to keep patients’ rehab stays within 1% of the days projected by the algorithm.

The lawsuit, filed in the U.S. District Court of Minnesota, accuses UnitedHealth and its subsidiary, NaviHealth, of using the computer algorithm to “systematically deny claims” of Medicare beneficiaries struggling to recover from debilitating illnesses in nursing homes. The suit also cites STAT’s previous reporting on the issue.

“The fraudulent scheme affords defendants a clear financial windfall in the form of policy premiums without having to pay for promised care,” the complaint alleges. “The elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary care, all because an [artificial intelligence] model ‘disagrees’ with their real live doctors’ recommendations.”


Here are some of my concerns:

The use of algorithms in healthcare decision-making has raised a number of ethical concerns. Some critics argue that algorithms can be biased and discriminatory, and that they can lead to decisions that are not in the best interests of patients. Others argue that algorithms can lack transparency, and that they can make it difficult for patients to understand how decisions are being made.

The lawsuit against UnitedHealth raises a number of specific ethical concerns. First, the plaintiffs allege that UnitedHealth's algorithm is based on inaccurate and incomplete data. This raises the concern that the algorithm may be making decisions that are not based on sound medical evidence. Second, the plaintiffs allege that UnitedHealth has failed to adequately train its employees on how to use the algorithm. This raises the concern that employees may be making decisions that are not in the best interests of patients, either because they do not understand how the algorithm works or because they are pressured to deny claims.

The lawsuit also raises the general question of whether algorithms should be used to make healthcare decisions. Some argue that algorithms can be used to make more efficient and objective decisions than humans can. Others argue that algorithms are not capable of making complex medical decisions that require an understanding of the individual patient's circumstances.

The use of algorithms in healthcare is a complex issue with no easy answers. It is important to carefully consider the potential benefits and risks of using algorithms before implementing them in healthcare settings.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).

Abstract

AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.


Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
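
A risk matrix like the one Sætra and Danaher suggest is easy to prototype. The sketch below scores each risk by likelihood and impact and ranks by their product; the risks and numbers are invented placeholders, not the authors' assessments:

```python
# A toy likelihood-times-impact risk matrix for prioritizing AI risks.
# All entries are illustrative assumptions.
risks = {
    "biased hiring model": {"likelihood": 0.8, "impact": 0.5},
    "large-scale disinformation": {"likelihood": 0.6, "impact": 0.7},
    "loss of control of advanced AI": {"likelihood": 0.1, "impact": 1.0},
}

def priority(name: str) -> float:
    r = risks[name]
    return r["likelihood"] * r["impact"]

# Print risks from highest to lowest priority.
for name in sorted(risks, key=priority, reverse=True):
    print(f"{name}: priority = {priority(name):.2f}")
```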

Wednesday, November 8, 2023

Everything you need to know about artificial wombs

Cassandra Willyard
MIT Technology Review
Originally posted 29 SEPT 23

Here is an excerpt:

What is an artificial womb?

An artificial womb is an experimental medical device intended to provide a womblike environment for extremely premature infants. In most of the technologies, the infant would float in a clear “biobag,” surrounded by fluid. The idea is that preemies could spend a few weeks continuing to develop in this device after birth, so that “when they’re transitioned from the device, they’re more capable of surviving and having fewer complications with conventional treatment,” says George Mychaliska, a pediatric surgeon at the University of Michigan.

One of the main limiting factors for survival in extremely premature babies is lung development. Rather than breathing air, babies in an artificial womb would have their lungs filled with lab-made amniotic fluid that mimics the amniotic fluid they would have had in utero. Neonatologists would insert tubes into blood vessels in the umbilical cord so that the infant’s blood could cycle through an artificial lung to pick up oxygen.

The device closest to being ready to be tested in humans, called the EXTrauterine Environment for Newborn Development, or EXTEND, encases the baby in a container filled with lab-made amniotic fluid. It was invented by Alan Flake and Marcus Davey at the Children’s Hospital of Philadelphia and is being developed by Vitara Biomedical.


Here is my take:

Artificial wombs are experimental medical devices that aim to provide a womb-like environment for extremely premature infants. The technology is still in its early stages of development, but it has the potential to save the lives of many babies who would otherwise not survive.

Overall, artificial wombs are a promising new technology with the potential to revolutionize the care of premature infants. However, more research is needed to fully understand the risks and benefits of the technology before it can be widely used.

Here are some additional ethical concerns that have been raised about artificial wombs:
  • The potential for artificial wombs to be used to create designer babies or to prolong the lives of fetuses with severe disabilities.
  • The potential for artificial wombs to be used to exploit or traffic babies.
  • The potential for artificial wombs to exacerbate existing social and economic inequalities.
It is important to have a public conversation about these ethical concerns before artificial wombs become widely available. We need to develop clear guidelines for how the technology should be used and ensure that it is used in a way that benefits all of society.

Friday, November 3, 2023

Posthumanism’s Revolt Against Responsibility

Nolen Gertz
Commonweal Magazine
Originally published 31 Oct 23

Here is an excerpt:

A major problem with this view—one Kirsch neglects—is that it conflates the destructiveness of particular humans with the destructiveness of humanity in general. Acknowledging that climate change is driven by human activity should not prevent us from identifying precisely which humans and activities are to blame. Plenty of people are concerned about climate change and have altered their behavior by, for example, using public transportation, recycling, or being more conscious about what they buy. Yet this individual behavior change is not sufficient because climate change is driven by the large-scale behavior of corporations and governments.

In other words, it is somewhat misleading to say we have entered the “Anthropocene” because anthropos is not as a whole to blame for climate change. Rather, in order to place the blame where it truly belongs, it would be more appropriate—as Jason W. Moore, Donna J. Haraway, and others have argued—to say we have entered the “Capitalocene.” Blaming humanity in general for climate change excuses those particular individuals and groups actually responsible. To put it another way, to see everyone as responsible is to see no one as responsible. Anthropocene antihumanism is thus a public-relations victory for the corporations and governments destroying the planet. They can maintain business as usual on the pretense that human nature itself is to blame for climate change and that there is little or nothing corporations or governments can or should do to stop it, since, after all, they’re only human.

Kirsch does not address these straightforward criticisms of Anthropocene antihumanism. This throws into doubt his claim that he is cataloguing their views to judge whether they are convincing and to explore their likely impact. Kirsch does briefly bring up the activist Greta Thunberg as a potential opponent of the nihilistic antihumanists, but he doesn’t consider her challenge in depth. 


Here is my summary:

Anthropocene antihumanism is a pessimistic view that sees humanity as a destructive force on the planet. It argues that humans have caused climate change, mass extinctions, and other environmental problems, and that we are ultimately incapable of living in harmony with nature. Some Anthropocene antihumanists believe that humanity should go extinct, while others believe that we should radically change our way of life in order to avoid destroying ourselves and the planet.

Some bullets
  • Posthumanism is a broad philosophical movement that challenges the traditional view of what it means to be human.
  • Anthropocene antihumanism and transhumanism are two strands of posthumanism that share a common theme of revolt against responsibility.
  • Anthropocene antihumanists believe that humanity is so destructive that it is beyond redemption, and that we should therefore either go extinct or give up our responsibility to manage the planet.
  • Transhumanists believe that we can transcend our human limitations and create a new, posthuman species that is not bound by the same moral and ethical constraints as humans.
  • Kirsch argues that this revolt against responsibility is a dangerous trend, and that we should instead work to create a more sustainable and just future for all.