Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Nonmaleficence. Show all posts

Monday, March 4, 2024

How to Deal with Counter-Examples to Common Morality Theory: A Surprising Result

Herissone-Kelly P.
Cambridge Quarterly of Healthcare Ethics.
2022;31(2):185-191.
doi:10.1017/S096318012100058X

Abstract

Tom Beauchamp and James Childress are confident that their four principles—respect for autonomy, beneficence, non-maleficence, and justice—are globally applicable to the sorts of issues that arise in biomedical ethics, in part because those principles form part of the common morality (a set of general norms to which all morally committed persons subscribe). Inevitably, however, the question arises of how the principlist ought to respond when presented with apparent counter-examples to this thesis. I examine a number of strategies the principlist might adopt in order to retain common morality theory in the face of supposed counter-examples. I conclude that only a strategy that takes a non-realist view of the common morality’s principles is viable. Unfortunately, such a view is likely not to appeal to the principlist.


Herissone-Kelly examines various strategies principlism could employ to address counter-examples:

  • Refine the principles: Clarify or reinterpret the principles to better handle specific cases.
  • Prioritize the principles: Establish a hierarchy among the principles to resolve conflicts.
  • Supplement the principles: Introduce additional considerations or context-specific factors.
  • Limit the scope: Acknowledge that the principles may not apply universally to all cultures or situations.
Herissone-Kelly argues that none of these strategies are fully satisfactory. Refining or prioritizing principles risks distorting their original meaning or introducing arbitrariness. Supplementing them can lead to an unwieldy and complex framework. Limiting their scope undermines the theory's claim to universality.

He concludes that the most viable approach is to adopt a non-realist view of the common morality's principles. This means understanding them not as objective moral facts but as flexible tools for ethical reflection and deliberation, open to interpretation and adaptation in different contexts. While this may seem to weaken the theory's authority, Herissone-Kelly argues that it allows for a more nuanced and practical application of ethical principles in a diverse world.

Monday, August 7, 2023

Shake-up at top psychiatric institute following suicide in clinical trial

Brendan Borrell
Spectrum News
Originally posted 31 July 23

Here are two excerpts:

The audit and turnover in leadership come after the halting of a series of clinical trials conducted by Columbia psychiatrist Bret Rutherford, which tested whether the drug levodopa — typically used to treat Parkinson’s disease — could improve mood and mobility in adults with depression.

During a double-blind study that began in 2019, a participant in the placebo group died by suicide. That study was suspended prior to completion, according to an update posted on ClinicalTrials.gov in 2022.

Two published reports based on Rutherford’s pilot studies have since been retracted, as Spectrum has previously reported. The National Institute of Mental Health has terminated Rutherford’s trials and did not renew funding of his research grant or K24 Midcareer Award.

Former members of Rutherford’s laboratory describe it as a high-pressure environment that often put publications ahead of study participants. “Research is important, but not more so than the lives of those who participate in it,” says Kaleigh O’Boyle, who served as clinical research coordinator there from 2018 to 2020.

Although Rutherford’s faculty page is still active, he is no longer listed in the directory at Columbia University, where he was associate professor, and the voicemail at his former number says he is no longer checking it. He did not respond to voicemails and text messages sent to his personal phone or to emails sent to his Columbia email address, and Cantor would not comment on his employment status.

The circumstances around the suicide remain unclear, and the institute has previously declined to comment on Rutherford’s retractions. Veenstra-VanderWeele confirmed that he is the new director but did not respond to further questions about the situation.

(cut)

In January 2022, the study was temporarily suspended by the U.S. National Institute of Mental Health, following the suicide. It is unknown whether that participant had been taking any antidepressant medication prior to the study.

Four of Rutherford’s published studies were subsequently retracted or corrected for issues related to how participants taking antidepressants at enrollment were handled.

One retraction notice published in February indicates tapering could be challenging and that the researchers did not always stick to the protocol. One-third of the participants taking antidepressants were unable to successfully taper off of them.


Note: The article serves as a cautionary tale about the risks of clinical trials. While clinical trials are a valuable way to test new drugs and treatments, they also carry risks: participants may be exposed to experimental drugs that have not been fully tested and may experience side effects that are not well understood. Ethical researchers must follow their approved protocols, put participant safety first, and report accurate results.

Saturday, October 15, 2022

Boundary Issues of Concern

Charles Dike
Psychiatric News
Originally posted 25 AUG 22

Here is an excerpt:

There are, of course, less prominent but equally serious boundary violations other than sexual relations with patients or a patient’s relatives. The case of Dr. Jerome Oremland, a prominent California psychiatrist, is one example. According to a report by KQED on October 3, 2016, John Pierce, a patient, alleged that his psychiatrist, Dr. Oremland, induced Mr. Pierce to give him at least 12 works of highly valued art. The psychiatrist argued that the patient had consented to their business dealings and that the art he had received from the patient was given willingly as payment for psychiatric treatment. The patient further alleged that Dr. Oremland used many of their sessions to solicit art, propose financial schemes (including investments), and discuss other subjects unrelated to treatment. Furthermore, the patient allegedly made repairs in Dr. Oremland’s home, offices, and rental units; helped clear out the home of Dr. Oremland’s deceased brother; and cleaned his pool. Mr. Pierce began therapy with Dr. Oremland in 1984 but brought a lawsuit against him in 2015. The court trial began shortly after Dr. Oremland’s death in 2016, and Dr. Oremland’s estate eventually settled with Mr. Pierce. In addition to being a private practitioner, Dr. Oremland had been chief of psychiatry at the Children’s Hospital in San Francisco and a clinical professor of psychiatry at UCSF. He also wrote books on the intersection of art and psychology.

(cut)

There are less dramatic but still problematic boundary crossings such as when a psychiatrist in private practice agrees that a patient may pay off treatment costs by doing some work for the psychiatrist. Other examples include a psychiatrist hiring a patient, for example, a skilled plumber, to work in the psychiatrist’s office or home at the patient’s going rate or obtaining investment tips from a successful investment banker patient. In these situations, questions arise about the physician-patient relationship. Even when the psychiatrist believes he or she is treating the patient fairly—such as paying the going rate for work done for the psychiatrist—the psychiatrist is clueless regarding how the patient is interpreting the arrangement: Does the patient experience it as exploitative? What are the patient’s unspoken expectations? What if the patient’s work in the psychiatrist’s office is inferior or the investment advice results in a loss? Would these outcomes influence the physician-patient relationship? Even compassionate acts such as writing off the bill of patients who are unable to pay or paying for an indigent patient’s medications should make the psychiatrist pause for thought. To avoid potential misinterpretation of the psychiatrist’s intentions or complaints of inequitable practices or favoritism, the psychiatrist should be ready to do the same for other indigent patients. It would be better to establish neutral policies for all indigent patients than to appear to favor some over others.

Thursday, June 23, 2022

Thousands of Medical Professionals Urge Supreme Court To Uphold Roe: ‘Provide Patients With the Treatment They Need’

Phoebe Kolbert
Ms. Magazine
Originally posted 21 JUN 22

Any day now, the Supreme Court will issue its decision in Dobbs v. Jackson Women’s Health Organization, which many predict will overturn or severely gut Roe v. Wade. Since the start of the Dobbs v. Jackson hearings in December, medical professionals have warned of the drastic health impacts brought on by abortion bans. Now, over 2,500 healthcare professionals from all 50 states have signed a letter urging the Supreme Court to scrap their leaked Dobbs draft opinion and uphold Roe.  

Within 30 days of a decision to overturn Roe, at least 26 states will ban abortion. Clinics in remaining pro-abortion states are preparing for increased violence from anti-abortion extremists and an influx of out-of-state patients. The number of legal abortions performed nationwide is projected to fall by about 13 percent. Many abortion clinics in states with bans will be forced to close their doors, if they haven’t already. The loss of these clinics also comes with the loss of the other essential reproductive healthcare they provide, including STI screenings and treatment, birth control and cervical cancer screenings.

The letter, titled “Medical Professionals Urge Supreme Court to Uphold Roe v. Wade, Protect Abortion Access,” argues that decisions around pregnancy and abortion should be made by patients and their doctors, not the courts.


Here is how the letter begins:

Medical Professionals Urge Supreme Court to Uphold Roe v. Wade, Protect Abortion Access

As physicians and health care professionals, we are gravely concerned that the U.S. Supreme Court appears prepared to end the constitutional right to an abortion. We urge the Supreme Court to scrap their draft opinion, uphold the constitutional right to an abortion, and ensure that abortions remain legal nationwide, as allowed for in Roe v. Wade. In this moment of crisis, we want to make crystal clear the consequences to our patients’ health if they can no longer access abortions.

Abortions are safe, common and a critical part of health care and reproductive medicine. Medical professionals and medical associations agree, including the American Medical Association, the American College of Obstetricians and Gynecologists, the American Academy of Family Physicians, the American College of Nurse Midwives and many others.

Prohibiting access to safe and legal abortion has devastating implications for health care. Striking down Roe v. Wade would affect not just abortion access, but also maternal care as well as fertility treatments. Pregnancy changes a person’s physiology. These changes can potentially worsen existing diseases and medical conditions.

As physicians and medical professionals, we see the real-life consequences when an individual does not get the care that they know they need, including abortions. The woman who has suffered the violation and trauma of rape would be forced to carry a pregnancy.

Denying people access to abortion when they want one can adversely affect their health, safety and economic well-being, including delayed separation from a violent partner and a fourfold increase in the likelihood of falling into poverty. These outcomes can also have drastic impacts on their health.

Saturday, February 26, 2022

Experts Are Ringing Alarms About Elon Musk’s Brain Implants

Noah Kirsch
Daily Beast
Posted 25 Jan 2021

Here is an excerpt:

“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.

“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”

In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.

Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.

There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”

The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?

Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”

Neuralink did not respond to a request for comment as of publication time.

Friday, February 25, 2022

Public Deliberation about Gene Editing in the Wild

M. K. Gusmano, E. Kaebnick, et al. (2021).
Hastings Center Report, 51(S2), S34–S41.
doi:10.1002/hast.1318

Abstract

Genetic editing technologies have long been used to modify domesticated nonhuman animals and plants. Recently, attention and funding have also been directed toward projects for modifying nonhuman organisms in the shared environment—that is, in the “wild.” Interest in gene editing nonhuman organisms for wild release is motivated by a variety of goals, and such releases hold the possibility of significant, potentially transformative benefit. The technologies also pose risks and are often surrounded by a high uncertainty. Given the stakes, scientists and advisory bodies have called for public engagement in the science, ethics, and governance of gene editing research in nonhuman organisms. Most calls for public engagement lack details about how to design a broad public deliberation, including questions about participation, how to structure the conversations, how to report on the content, and how to link the deliberations to policy. We summarize the key design elements that can improve broad public deliberations about gene editing in the wild.

Here is the gist of the paper:

We draw on interdisciplinary scholarship in bioethics, political science, and public administration to move forward on this knot of conceptual, normative, and practical problems. When is broad public deliberation about gene editing in the wild necessary? And when it is required, how should it be done? These questions lead to a suite of further questions about, for example, the rationale and goals of deliberation, the features of these technologies that make public deliberation appropriate or inappropriate, the criteria by which “stakeholders” and “relevant publics” for these uses might be identified, how different approaches to public deliberation map onto the challenges posed by the technologies, how the topic to be deliberated upon should be framed, and how the outcomes of public deliberation can be meaningfully connected to policy-making.

Thursday, February 24, 2022

Robot performs first laparoscopic surgery without human help (and outperformed human doctors)

Johns Hopkins University. (2022, January 26). 
ScienceDaily. Retrieved January 28, 2022

A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human -- a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.

"Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure," said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins' Whiting School of Engineering.

The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.

Working with collaborators at the Children's National Hospital in Washington, D.C. and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig's intestines accurately, but required a large incision to access the intestine and more guidance from humans.

The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the art imaging systems that provide more accurate visualizations of the surgical field.

Soft-tissue surgery is especially hard for robots because of its unpredictability, forcing them to be able to adapt quickly to handle unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.

Monday, April 19, 2021

The Military Is Funding Ethicists to Keep Its Brain Enhancement Experiments in Check

Sara Scoles
Medium.com
Originally posted 1 April 21

Here is an excerpt:

The Department of Defense has already invested in a number of projects to which the Minerva research has relevance. The Army Research Laboratory, for example, has funded researchers who captured and transmitted a participant’s thoughts about a character’s movement in a video game, using magnetic stimulation to beam those neural instructions to another person’s brain and cause movement. And it has supported research using deep learning algorithms and EEG readings to predict a person’s “drowsy and alert” states.

Evans points to one project funded by Defense Advanced Research Projects Agency (DARPA): Scientists tested a BCI that allowed a woman with quadriplegia to drive a wheelchair with her mind. Then, “they disconnected the BCI from the wheelchair and connected to a flight simulator,” Evans says, and she brainfully flew a digital F-35. “DARPA has expressed pride that their work can benefit civilians,” says Moreno. “That helps with Congress and with the public so it isn’t just about ‘supersoldiers,’” says Moreno.

Still, this was a civilian participant, in a Defense-funded study, with “fairly explicitly military consequences,” says Evans. And the big question is whether the experiment’s purpose justifies the risks. “There’s no obvious therapeutic reason for learning to fly a fighter jet with a BCI,” he says. “Presumably warfighters have a job that involves, among other things, fighter jets, so there might be a strategic reason to do this experiment. Civilians rarely do.”

It’s worth noting that warfighters are, says Moreno, required to take on more risks than the civilians they are protecting, and in experiments, military members may similarly be asked to shoulder more risk than a regular-person participant.

DARPA has also worked on implants that monitor mood and boost the brain back to “normal” if something looks off, created prosthetic limbs animated by thought, and made devices that improve memory. While those programs had therapeutic aims, the applications and follow-on capabilities extend into the enhancement realm — altering mood, building superstrong bionic arms, generating above par memory.

Thursday, March 4, 2021

‘Pastorally dangerous’: U.S. bishops risk causing confusion about vaccines, ethicists say

Michael J. O’Loughlin
America Magazine
Originally published March 02, 2021

Here is an excerpt:

Anthony Egan, S.J., a Jesuit priest and lecturer in theology in South Africa, said church leaders publishing messages about hypothetical situations during a crisis is “unhelpful” as Catholics navigate life in a pandemic.

“I think it’s pastorally dangerous because people are dealing with all kinds of crises—people are faced with unemployment, people are faced with disease, people are faced with death—and to make this kind of statement just adds to the general feeling of unease, a general feeling of crisis,” Father Egan said, noting that in South Africa, which has been hard hit by a more aggressive variant, the Johnson & Johnson vaccine is the only available option. “I don’t think that’s pastorally helpful.”

The choice about taking a vaccine like Johnson & Johnson’s must come down to individual conscience, he said. “I think it’s irresponsible to make a claim that you must absolutely not or absolutely must take the drug,” he said.

Ms. Fullam agreed, saying modern life is filled with difficult dilemmas stemming from previous injustices and “one of the great things about the Catholic moral tradition is that we recognize the world is a messy place, but we don’t insist Catholics stay away from that messiness.” Catholics, she said, are called “to think about how to make the situation better” rather than retreat in the face of complexity and given the ongoing pandemic, receiving a vaccine with a remote connection to abortion could be the right decision—especially in communities where access to vaccines might be difficult.

Friday, July 3, 2020

American Psychiatric Association Presidential Task Force to Address Structural Racism Throughout Psychiatry

Press Release
American Psychiatric Association
2 July 2020

The American Psychiatric Association today announced the members and charge of its Presidential Task Force to Address Structural Racism Throughout Psychiatry. The Task Force was initially described at an APA Town Hall on June 15 amidst rising calls from psychiatrists for action on racism. It held its first meeting on June 27, and efforts, including the planning of future town halls, surveys and the establishment of related committees, are underway.

Focusing on organized psychiatry, psychiatrists, psychiatric trainees, psychiatric patients, and others who work to serve psychiatric patients, the Task Force is initially charged with:
  1. Providing education and resources on APA’s and psychiatry’s history regarding structural racism;
  2. Explaining the current impact of structural racism on the mental health of our patients and colleagues;
  3. Developing achievable and actionable recommendations for change to eliminate structural racism in the APA and psychiatry now and in the future;
  4. Providing reports with specific recommendations for achievable actions to the APA Board of Trustees at each of its meetings through May 2021; and
  5. Monitoring the implementation of tasks 1-4.

Monday, December 30, 2019

23 and Baby

Tanya Lewis
nature.com
Originally posted 4 Dec 19

Here are two excerpts:

Proponents say that genetic testing of newborns can help diagnose a life-threatening childhood-onset disease in urgent cases and could dramatically increase the number of genetic conditions all babies are screened for at birth, enabling earlier diagnosis and treatment. It could also inform parents of conditions they could pass on to future children or of their own risk of adult-onset diseases. Genetic testing could detect hundreds or even thousands of diseases, an order of magnitude more than current heel-stick blood tests—which all babies born in the U.S. undergo at birth—or confirm results from such a test.

But others caution that genetic tests may do more harm than good. They could miss some diseases that heel-stick testing can detect and produce false positives for others, causing anxiety and leading to unnecessary follow-up testing. Sequencing children’s DNA also raises issues of consent and the prospect of genetic discrimination.

Regardless of these concerns, newborn genetic testing is already here, and it is likely to become only more common. But is the technology sophisticated enough to be truly useful for most babies? And are families—and society—ready for that information?

(cut)

Then there’s the issue of privacy. If the child’s genetic information is stored on file, who has access to it? If the information becomes public, it could lead to discrimination by employers or insurance companies. The Genetic Information Nondiscrimination Act (GINA), passed in 2008, prohibits such discrimination. But GINA does not apply to employers with fewer than 15 employees and does not cover insurance for long-term care, life or disability. It also does not apply to people employed and insured by the military’s Tricare system, such as Rylan Gorby. When his son’s genome was sequenced, researchers also obtained permission to sequence Rylan’s genome, to determine if he was a carrier for the rare hemoglobin condition. Because it manifests itself only in childhood, Gorby decided taking the test was worth the risk of possible discrimination.


Friday, October 18, 2019

The Koch-backed right-to-try law has been a bust, but still threatens our health

Michael Hiltzik
The Los Angeles Times
Originally posted September 17, 2019

The federal right-to-try law, signed by President Trump in May 2018 as a sop to right-wing interests, including the Koch brothers network, always was a cruel sham perpetrated on sufferers of intractably fatal diseases.

As we’ve reported, the law was promoted as a compassionate path to experimental treatments for those patients — but in fact was a cynical ploy aimed at emasculating the Food and Drug Administration in a way that would undermine public health and harm all patients.

Now that a year has passed since the law’s enactment, the assessments of how it has functioned are beginning to flow in. As NYU bioethicist Arthur Caplan observed to Ed Silverman’s Pharmalot blog, “the right to try remains a bust.”

His judgment is seconded by the veteran pseudoscience debunker David Gorski, who writes: “Right-to-try has been a spectacular failure thus far at getting terminally ill patients access to experimental drugs.”

That should come as no surprise, Gorski adds, because “right-to-try was never about helping terminally ill patients. ... It was always about ideology more than anything else. It was always about weakening the FDA’s ability to regulate drug approval.”


Friday, September 20, 2019

The crossroads between ethics and technology

Tehilla Shwartz Altshuler
Techcrunch.com
Originally posted August 6, 2019

Here is an excerpt:

The first relates to ethics. If anything is clear today in the world of technology, it is the need to include ethical concerns when developing, distributing, implementing and using technology. This is all the more important because in many domains there is no regulation or legislation to provide a clear definition of what may and may not be done. There is nothing intrinsic to technology that requires that it pursue only good ends. The mission of our generation is to ensure that technology works for our benefit and that it can help realize social ideals. The goal of these new technologies should not be to replicate power structures or other evils of the past. 

Startup nation should focus on fighting crime and improving autonomous vehicles and healthcare advancements. It shouldn’t be running extremist groups on Facebook, setting up “bot farms” and fakes, selling attackware and spyware, infringing on privacy and producing deepfake videos.

The second issue is the lack of transparency. The combination of individuals and companies that have worked for, and sometimes still work with, the security establishment frequently takes place behind a thick screen of concealment. These entities often evade answering challenging questions that result from the Israeli Freedom of Information law and even have recourse to the military censor — a unique Israeli institution — to avoid such inquiries.


Monday, August 5, 2019

Ethical considerations in assessment and behavioral treatment of obesity: Issues and practice implications for clinical health psychologists

Williamson, T. M., Rash, J. A., Campbell, T. S., & Mothersill, K. (2019).
Professional Psychology: Research and Practice. Advance online publication.
http://dx.doi.org/10.1037/pro0000249

Abstract

The obesity epidemic in the United States and Canada has been accompanied by an increased demand on behavioral health specialists to provide comprehensive behavior therapy for weight loss (BTWL) to individuals with obesity. Clinical health psychologists are optimally positioned to deliver BTWL because of their advanced competencies in multimodal assessment, training in evidence-based methods of behavior change, and proficiencies in interdisciplinary collaboration. Although published guidelines provide recommendations for optimal design and delivery of BTWL (e.g., behavior modification, cognitive restructuring, and mindfulness practice; group-based vs. individual therapy), guidelines on ethical issues that may arise during assessment and treatment remain conspicuously absent. This article reviews clinical practice guidelines, ethical codes (i.e., the Canadian Code of Ethics for Psychologists and the American Psychological Association Ethical Principles of Psychologists), and the extant literature to highlight obesity-specific ethical considerations for psychologists who provide assessment and BTWL in health care settings. Five key themes emerge from the literature: (a) informed consent (instilling realistic treatment expectations; reasonable alternatives to BTWL; privacy and confidentiality); (b) assessment (using a biopsychosocial approach; selecting psychological tests); (c) competence and scope of practice (self-assessment; collaborative care); (d) recognition of personal bias and discrimination (self-examination, diversity); and (e) maximizing treatment benefit while minimizing harm. Practical recommendations grounded in the American Psychological Association’s competency training model for clinical health psychologists are discussed to assist practitioners in addressing and mitigating ethical issues in practice.

Friday, June 14, 2019

The Ethics of Treating Loved Ones

Christopher Cheney
www.medpagetoday.com
Originally posted May 19, 2019

When treating family members, friends, colleagues, or themselves, ER physicians face ethical, professional, patient welfare, and liability concerns, a recent research article found.

Similar to situations arising in the treatment of VIP patients, ER physicians treating loved ones or close associates may vary their customary medical care from the standard treatment and inadvertently produce harm rather than benefit.

"Despite being common, this practice raises ethical concerns and concern for the welfare of both the patient and the physician," the authors of the recent article wrote in the American Journal of Emergency Medicine.

There are several liability concerns for clinicians, the lead author explained.


"Doctors would be held to the same standard of care as for other patients, and if care is violated and leads to damages, they could be liable. Intuitively, family and friends might be less likely to sue but that is not true of subordinates. In addition, as we state in the paper, for most ED physicians, practice outside of the home institution is not a covered event by the malpractice insurer," said Joel Geiderman, MD, professor and co-chairman of emergency medicine, Department of Emergency Medicine, Cedars-Sinai Medical Center, Los Angeles.

The info is here.

Tuesday, March 26, 2019

Does AI Ethics Have a Bad Name?

Calum Chace
Forbes.com
Originally posted March 7, 2019

Here is an excerpt:

Artificial intelligence is a technology, and a very powerful one, like nuclear fission.  It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire.  Like nuclear fission, electricity and fire, AI can have positive impacts and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.

It's the bias that concerns people in the AI ethics community.  They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether.  They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible so that in advance or in retrospect, we can check for sources of bias and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”?  We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics?  There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent.

The info is here.

Friday, July 27, 2018

Informed Consent and the Role of the Treating Physician

Holly Fernandez Lynch, Steven Joffe, and Eric A. Feldman
Originally posted June 21, 2018
N Engl J Med 2018; 378:2433-2438
DOI: 10.1056/NEJMhle1800071

Here are a few excerpts:

In 2017, the Pennsylvania Supreme Court ruled that informed consent must be obtained directly by the treating physician. The authors discuss the potential implications of this ruling and argue that a team-based approach to consent is better for patients and physicians.

(cut)

Implications in Pennsylvania and Beyond

Shinal has already had a profound effect in Pennsylvania, where it represents a substantial departure from typical consent practice.  More than half the physicians who responded to a recent survey conducted by the Pennsylvania Medical Society (PAMED) reported a change in the informed-consent process in their work setting; of that group, the vast majority expressed discontent with the effect of the new approach on patient flow and the way patients are served.  Medical centers throughout the state have changed their consent policies, precluding nonphysicians from obtaining patient consent to the procedures specified in the MCARE Act and sometimes restricting the involvement of physician trainees.  Some Pennsylvania institutions have also applied the Shinal holding to research, in light of the reference in the MCARE Act to experimental products and uses, despite the clear policy of the Food and Drug Administration (FDA) allowing investigators to involve other staff in the consent process.

(cut)

Selected State Informed-Consent Laws.

Although the Shinal decision is not binding outside of Pennsylvania, cases bearing on critical ethical dimensions of consent have a history of influence beyond their own jurisdictions.

The information is here.

Tuesday, July 24, 2018

Amazon, Google and Microsoft Employee AI Ethics Are Best Hope For Humanity

Paul Armstrong
Forbes.com
Originally posted June 26, 2018

Here is an excerpt:

Google recently lost the 'Don't be Evil' from its Code of Conduct documents but what were once guiding words now appear to be afterthoughts, and they aren't alone. From drone use to deals with the immigration services, large tech companies are looking to monetise their creations and who can blame them - projects can cost double digit millions as companies look to maintain an edge in a continually evolving marketplace. Employees are not without a conscience, it seems, and as talent becomes the one thing that companies need in this war, that power needs to be wielded, or we risk runaway train scenarios. If you want an idea of where things could go read this.

China is using AI software and facial recognition to determine who can travel, using what and where. You might think this is a ways away from being used on US or UK soil, but you'd be wrong. London has cameras on pretty much all streets, and the US has Amazon's Rekognition (Orlando just abandoned its use, but other tests remain active). Employees need to be the conscience of large entities, not only the ACLU or the civil-liberties inclined. From racist AI to faked video using machine learning to create better fakes, how you form technology matters as much as the why. Google has already mastered the technology to convince a human it is not talking to a robot thanks to um's and ah's - Google's next job is to convince us that is a good thing.

The information is here.

Wednesday, November 15, 2017

Catholic Hospital Group Grants Euthanasia to Mentally Ill, Defying Vatican

Francis X. Rocca
The Wall Street Journal
Originally posted October 27, 2017

A chain of Catholic psychiatric hospitals in Belgium is granting euthanasia to non-terminal patients, defying the Vatican and deepening a challenge to the church’s commitment to a constant moral code.

The board of the Brothers of Charity, Belgium’s largest single provider of psychiatric care, said the decision no longer belongs to Rome.

Truly Christian values, the board argued in September, should privilege a “person’s choice of conscience” over a “strict ethic of rules.”

The policy change is highly symbolic, said Didier Pollefeyt, a theologian and vice rector of the Catholic University of Leuven.

“The Brothers of Charity have been seen as a beacon of hope and resistance” to euthanasia, he said. “Now that the most Catholic institution gives up resistance, it looks like the most normal thing in the world.”

Belgium legalized euthanasia in 2002, the first country with a majority Catholic population to do so. Belgian bishops opposed the legislation, in line with the church’s catechism, which states that causing the death of the handicapped, sick or dying to eliminate their suffering is murder.

The article is here.

Sunday, October 1, 2017

Future Frankensteins: The Ethics of Genetic Intervention

Philip Kitcher
Los Angeles Review of Books
Originally posted September 4, 2017

Here is an excerpt:

The more serious argument perceives risks involved in germline interventions. Human knowledge is partial, and so perhaps we will fail to recognize some dire consequence of eliminating a particular sequence from the genomes of all members of our species. Of course, it is very hard to envisage what might go wrong — in the course of human evolution, many DNA sequences have arisen and disappeared. Moreover, in this instance, assuming a version of CRISPR-Cas9 sufficiently reliable to use on human beings, we could presumably undo whatever damage we had done. But, a skeptic may inquire, why take any risk at all? Surely somatic interventions will suffice. No need to tamper with the germline, since we can always modify the bodies of the unfortunate people afflicted with troublesome sequences.

Doudna and Sternberg point out, in a different context, one reason why this argument fails: some genes associated with disease act too early in development (in utero, for example). There is a second reason for failure. In a world in which people are regularly rescued through somatic interventions, the percentage of later generations carrying problematic sequences is likely to increase, with the consequence that ever more resources would have to be devoted to editing the genomes of individuals.  Human well-being might be more effectively promoted through a program of germline intervention, freeing those resources to help those who suffer in other ways. Once again, allowing editing of eggs and sperm seems to be the path of compassion. (The problems could be mitigated if genetic testing and in vitro fertilization were widely available and widely used, leaving somatic interventions as a last resort for those who slipped through the cracks. But extensive medical resources would still be required, and encouraging — or demanding — pre-natal testing and use of IVF would introduce a problematic and invasive form of eugenics.)

The article is here.