Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Risks. Show all posts

Monday, November 12, 2018

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Saturday, October 20, 2018

Who should answer the ethical questions surrounding artificial intelligence?

Jack Karsten
Brookings.edu
Originally published September 14, 2018

Continuing advancements in artificial intelligence (AI) for use in both the public and private sectors warrant serious ethical consideration. As the capability of AI improves, the issues of transparency, fairness, privacy, and accountability associated with using these technologies become more serious. Many developers in the private sector acknowledge the threats AI poses and have created their own codes of ethics to monitor AI development responsibly. However, many experts believe government regulation may be required to resolve issues ranging from racial bias in facial recognition software to the use of autonomous weapons in warfare.

On Sept. 14, the Center for Technology Innovation hosted a panel discussion at the Brookings Institution to consider the ethical dilemmas of AI. Brookings scholars Christopher Meserole, Darrell West, and William Galston were joined by Charina Chou, the global policy lead for emerging technologies at Google, and Heather Patterson, a senior research scientist at Intel.

Enjoy the video.


Wednesday, September 12, 2018

How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?

Cynthia E. Schairer, Caryn Kseniya Rubanovich, and Cinnamon S. Bloss
AMA J Ethics. 2018;20(9):E864-872.

Abstract

Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.

The info is here.

Sunday, September 2, 2018

Negative effects in psychotherapy

Alexander Rozental, Louis Castonguay, Sona Dimidjian, and others
BJPsych Open (2018) 4, 307–312.

Background

Psychotherapy can alleviate mental distress and improve quality of life, but little is known about its potential negative effects and how to determine their frequency.

Aims

To present a commentary on the current understanding and future research directions of negative effects in psychotherapy.

Method

An anonymous survey was distributed to a select group of researchers, using an analytical framework known as strengths, weaknesses, opportunities and threats (SWOT).

Results

The researchers perceive an increased awareness of negative effects in psychotherapy in recent years, but also discuss some of the unresolved issues in relation to their definition, assessment and reporting. Qualitative methods and naturalistic designs are regarded as important to pursue, although a number of obstacles to using such methods are identified.

Conclusion

Negative effects of psychotherapy are multifaceted, warranting careful considerations in order for them to be monitored and reported in research settings and routine care.

The info is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.

The article is here.

Monday, December 25, 2017

First Baby Born To U.S. Uterus Transplant Patient Raises Ethics Questions

Greta Jochem
NPR.org
Originally published December 5, 2017

Here is an excerpt:

We mention that not everyone is celebrating this. It raises some ethical questions. Is it possible with a procedure that is so experimental, so risky, to get informed consent from women who desperately want to have a baby?

Dr. Testa: I doubt it is possible for lay people to have informed consent about anything we do in medicine, if you ask me. This is even more complicated because we are going into uncharted waters. ... I think that we go through years of studying to understand what we do, and to achieve mastering the things we do. And then we pretend that in ten minutes we can explain something to anybody. ... I don't think it's really possible.

... We try to use the simplest terms we can think about and then we leave it to the autonomy of the patients, in this case not even patients, these women, to make the decisions. I think we really refrain, and it was really important for us, from any pressure of any kind from our side but then of course, the inner pressure of this woman to have a child I think drove the entire process and their decision at the end.

The article is here.

Monday, November 27, 2017

Social Media Channels in Health Care Research and Rising Ethical Issues

Samy A. Azer
AMA Journal of Ethics. November 2017, Volume 19, Number 11: 1061-1069.

Abstract

Social media channels such as Twitter, Facebook, and LinkedIn have been used as tools in health care research, opening new horizons for research on health-related topics (e.g., the use of mobile social networking in weight loss programs). While there have been efforts to develop ethical guidelines for internet-related research, researchers still face unresolved ethical challenges. This article investigates some of the risks inherent in social media research and discusses how researchers should handle challenges related to confidentiality, privacy, and consent when social media tools are used in health-related research.

Here is an excerpt:

Social Media Websites and Ethical Challenges

While one may argue that regardless of the design and purpose of social media websites (channels) all information conveyed through social media should be considered public and therefore usable in research, such a generalization is incorrect and does not reflect the principles we follow in other types of research. The distinction between public and private online spaces can blur, and in some situations it is difficult to draw a line. Moreover, as discussed later, social media channels operate under different rules than research, and thus using these tools in research may raise a number of ethical concerns, particularly in health-related research. Good research practice fortifies high-quality science; ethical standards, including integrity; and the professionalism of those conducting the research. Importantly, it ensures the confidentiality and privacy of information collected from individuals participating in the research. Yet, in social media research, there are challenges to ensuring confidentiality, privacy, and informed consent.

The article is here.

Friday, November 3, 2017

A fundamental problem with Moral Enhancement

Joao Fabiano
Practical Ethics
Originally posted October 13, 2017

Moral philosophers often prefer to conceive thought experiments, dilemmas and problem cases involving single individuals who make one-shot decisions with well-defined short-term consequences. Morality is complex enough that such simplifications seem justifiable, or even necessary, for philosophical reflection. If we are still far from consensus on which is the best moral theory or what makes actions right or wrong – or even on whether such questions should be the central problem of moral philosophy – despite considering only simplified toy scenarios, then introducing group or long-term effects would make matters significantly worse. However, when it comes to actually changing human moral dispositions with the use of technology (i.e., moral enhancement), ignoring the essential fact that morality deals with group behaviour and long-ranging consequences can be extremely risky. Despite those risks, attempting to provide a full account of morality before conducting moral enhancement would be impractical as well as arguably risky in its own right. We seem to be far from such an account, yet there are pressing current moral failings – such as the inability to cooperate properly at large scale – that make solving present global catastrophic risks, such as global warming or nuclear war, next to impossible. Sitting back and waiting for a complete theory of morality might be riskier than attempting to fix our moral failings using incomplete theories. We must, nevertheless, proceed with caution and an awareness of that incompleteness. Here I will present several severe risks from moral enhancement that arise from focusing on improving individual dispositions while ignoring emergent societal effects, and point to tentative solutions to those risks.
I deem these emergent risks fundamental problems both because they lie at the foundation of the theoretical framework guiding moral enhancement – moral philosophy – and because they seem, at present, inescapable; my proposed solution will aim at increasing awareness of these problems rather than directly solving them.

The article is here.

Sunday, October 1, 2017

Future Frankensteins: The Ethics of Genetic Intervention

Philip Kitcher
Los Angeles Review of Books
Originally posted September 4, 2017

Here is an excerpt:

The more serious argument perceives risks involved in germline interventions. Human knowledge is partial, and so perhaps we will fail to recognize some dire consequence of eliminating a particular sequence from the genomes of all members of our species. Of course, it is very hard to envisage what might go wrong — in the course of human evolution, many DNA sequences have arisen and disappeared. Moreover, in this instance, assuming a version of CRISPR-Cas9 sufficiently reliable to use on human beings, we could presumably undo whatever damage we had done. But, a skeptic may inquire, why take any risk at all? Surely somatic interventions will suffice. No need to tamper with the germline, since we can always modify the bodies of the unfortunate people afflicted with troublesome sequences.

Doudna and Sternberg point out, in a different context, one reason why this argument fails: some genes associated with disease act too early in development (in utero, for example). There is a second reason for failure. In a world in which people are regularly rescued through somatic interventions, the percentage of later generations carrying problematic sequences is likely to increase, with the consequence that ever more resources would have to be devoted to editing the genomes of individuals.  Human well-being might be more effectively promoted through a program of germline intervention, freeing those resources to help those who suffer in other ways. Once again, allowing editing of eggs and sperm seems to be the path of compassion. (The problems could be mitigated if genetic testing and in vitro fertilization were widely available and widely used, leaving somatic interventions as a last resort for those who slipped through the cracks. But extensive medical resources would still be required, and encouraging — or demanding — pre-natal testing and use of IVF would introduce a problematic and invasive form of eugenics.)

The article is here.

Thursday, August 31, 2017

Stress Leads to Bad Decisions. Here’s How to Avoid Them

Ron Carucci
Harvard Business Review
Originally posted August 29, 2017

Here is an excerpt:

Facing high-risk decisions. 

For routine decisions, most leaders fall into one of two camps: The “trust your gut” leader makes highly intuitive decisions, and the “analyze everything” leader wants lots of data to back up their choice. Usually, a leader’s preference for one of these approaches poses minimal threat to the decision’s quality. But the stress caused by a high-stakes decision can provoke them to the extremes of their natural inclination. The highly intuitive leader becomes impulsive, missing critical facts. The highly analytical leader gets paralyzed in data, often failing to make any decision. The right blend of data and intuition applied to carefully constructing a choice builds the organization’s confidence for executing the decision once made. Clearly identify the risks inherent in the precedents underlying the decision and communicate that you understand them. Examine available data sets, identify any conflicting facts, and vet them with appropriate stakeholders (especially superiors) to make sure your interpretations align. Ask for input from others who’ve faced similar decisions. Then make the call.

Solving an intractable problem. 

To a stressed-out leader facing a chronic challenge, it often feels like their only options are to either (1) vehemently argue for their proposed solution with unyielding certainty, or (2) offer ideas very indirectly to avoid seeming domineering and to encourage the team to take ownership of the challenge. The problem, again, is that neither extreme works. If people feel the leader is being dogmatic, they will disengage regardless of the merits of the idea. If they feel the leader lacks confidence in the idea, they will struggle to muster conviction to try it, concluding, “Well, if the boss isn’t all that convinced it will work, I’m not going to stick my neck out.”

The article is here.

Thursday, May 11, 2017

Is There a Duty to Use Moral Neurointerventions?

Michelle Ciurria
Topoi (2017).
doi:10.1007/s11245-017-9486-4

Abstract

Do we have a duty to use moral neurointerventions to correct deficits in our moral psychology? On their surface, these technologies appear to pose worrisome risks to valuable dimensions of the self, and these risks could conceivably weigh against any prima facie moral duty we have to use these technologies. Focquaert and Schermer (Neuroethics 8(2):139–151, 2015) argue that neurointerventions pose special risks to the self because they operate passively on the subject’s brain, without her active participation, unlike ‘active’ interventions. Some neurointerventions, however, appear to be relatively unproblematic, and some appear to preserve the agent’s sense of self precisely because they operate passively. In this paper, I propose three conditions that need to be met for a medical intervention to be considered low-risk, and I say that these conditions cut across the active/passive divide. A low-risk intervention must: (i) pass pre-clinical and clinical trials, (ii) fare well in post-clinical studies, and (iii) be subject to regulations protecting informed consent. If an intervention passes these tests, its risks do not provide strong countervailing reasons against our prima facie duty to undergo the intervention.

The article is here.

Wednesday, March 1, 2017

Clinicians’ Expectations of the Benefits and Harms of Treatments, Screening, and Tests

Tammy C. Hoffmann & Chris Del Mar
JAMA Intern Med. 
Published online January 9, 2017.
doi:10.1001/jamainternmed.2016.8254

Question

Do clinicians have accurate expectations of the benefits and harms of treatments, tests, and screening?

Findings

In this systematic review of 48 studies (13 011 clinicians), most participants correctly estimated 13% of the 69 harm expectation outcomes and 11% of the 28 benefit expectations. The majority of participants overestimated benefit for 32% of outcomes, underestimated benefit for 9%, underestimated harm for 34%, and overestimated harm for 5% of outcomes.

Meaning

Clinicians rarely had accurate expectations of benefits or harms, with inaccuracies in both directions, but more often underestimated harms and overestimated benefits.

The research is here.

Thursday, February 23, 2017

Equipoise in Research: Integrating Ethics and Science in Human Research

Alex John London
JAMA. 2017;317(5):525-526. doi:10.1001/jama.2017.0016

The principle of equipoise states that, when there is uncertainty or conflicting expert opinion about the relative merits of diagnostic, prevention, or treatment options, allocating interventions to individuals in a manner that allows the generation of new knowledge (eg, randomization) is ethically permissible. The principle of equipoise reconciles 2 potentially conflicting ethical imperatives: to ensure that research involving human participants generates scientifically sound and clinically relevant information while demonstrating proper respect and concern for the rights and interests of study participants.

The article is here.

Tuesday, February 21, 2017

Let's not be friends: A risk of Facebook

By Amy Novotney
The Monitor on Psychology
2017, Vol 48, No. 2
Print version: page 18

Here is an excerpt:

Talking to clients about their privacy concerns.

Kolmes advises all clinicians to discuss privacy risks involved in using social media with their clients and to work through how to handle a situation in which a therapist's name pops up under their "People You May Know" tab.

"It's about having clear and open conversations with your clients about what you're going to do to protect their privacy and confidentiality and avoid inviting a multiple relationship, and letting them know they can also discuss this with the therapist if it comes up on their end," Kolmes says. When she does receive a friend request from a client on Facebook, she waits until she sees him or her next in session and checks to see if the request was accidental or not. Regardless of whether they searched for her or just had her recommended as a friend, she reminds them about the importance of patient confidentiality and privacy, and notes that following one another on social media can add a "social" element to their work and can complicate matters when it comes to what the therapist is supposed to know or not know about them.

The article is here.

Saturday, June 18, 2016

The New Era of Informed Consent

Getting to a Reasonable-Patient Standard Through Shared Decision Making

Erica S. Spatz, Harlan M. Krumholz, and Benjamin W. Moulton
JAMA. 2016; 315(19):2063-2064. doi:10.1001/jama.2016.3070.

Here is an excerpt:

Informed consent discussions are often devoid of details about the material risks, benefits, and alternatives that are critical to meaningful patient decision making. Informed consent documents for procedures, surgery, and medical treatments with material risks (eg, radiation therapy) tend to be generic, containing information intended to protect the physician or hospital from litigation. These documents are often written at a high reading level and sometimes presented in nonlegible print, putting a premium on health literacy and proactive information-seeking behavior. Moreover, informed consent documents are often signed minutes before the start of a procedure, a time when patients are most vulnerable and least likely to ask questions—hardly consistent with what a reasonable patient would deem acceptable. In the United States, with the exception of 1 state, Washington, that explicitly recognizes shared decision making as an alternative to the traditional informed consent process, the law has yet to promote a process that truly supports a reasonable-patient–centered standard through shared decision making.

The article is here.

Friday, June 3, 2016

Disclosure of incidental constituents of psychotherapy as a moral obligation for psychiatrists and psychotherapists

Manuel Trachsel & Jens Gaab
J Med Ethics 2016;0:1–3.
doi:10.1136/medethics-2015-102986

Abstract

Informed consent to medical intervention reflects the moral principle of respect for autonomy and the patient's right to self-determination. In psychotherapy, this includes a requirement to inform the patient about those components of treatment purported to cause the therapeutic effect. This information must encompass positive expectancies of change and placebo-related or incidental constituent therapy effects, which are as important as specific intervention techniques for the efficacy of psychotherapy. There is a risk that informing the patient about possible incidental constituents of therapy may reduce or even completely impede these effects, with negative consequences for overall outcome. However, withholding information about incidental constituents of psychotherapy would effectively represent a paternalistic action at the expense of patient autonomy; whether such paternalism might in certain circumstances be justified forms part of the present discussion.

The article is here.

Tuesday, December 1, 2015

I can't get over the first time a patient killed herself

Anonymous
The Guardian
Originally posted November 12, 2015

It wasn’t an if, but a when. I knew it would happen eventually. I knew it would suck. Finding out a patient has killed themselves definitely does. And it should. The first time something like this happens and it stops sucking, I will immediately hand in my notice and do something else.

I found out one lunchtime. My colleague, Matt, and I were talking about responsibility and moaning about how, as therapists, we can be expected to be “on call” 24/7, soothing the distress of those we work with, far beyond the limits of the therapeutic hour we spend with patients. We had received a message from our patient Jenny’s friend asking if we could ring her back. In the kitchen, while I was eating risotto, we decided between the two of us who should chase it up. Matt went to call her and returned a few minutes later looking grim. He asked if he could speak to me privately. I sensed something was seriously wrong. I said, “It’s not great, is it?” He just said, “No.”

The entire article is here.

Monday, April 13, 2015

Antipsychotics, Other Psychotropics, and the Risk of Death in Patients With Dementia

Maust DT, Kim H, Seyfried LS, et al.
Antipsychotics, Other Psychotropics, and the Risk of Death in Patients With Dementia: Number Needed to Harm.
JAMA Psychiatry. Published online March 18, 2015.
doi:10.1001/jamapsychiatry.2014.3018.


Importance

Antipsychotic medications are associated with increased mortality in older adults with dementia, yet their absolute effect on risk relative to no treatment or an alternative psychotropic is unclear.

Objective

To determine the absolute mortality risk increase and number needed to harm (NNH) (ie, number of patients who receive treatment that would be associated with 1 death) of antipsychotic, valproic acid and its derivatives, and antidepressant use in patients with dementia relative to either no treatment or antidepressant treatment.
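For readers unfamiliar with the metric, the NNH defined above is simply the reciprocal of the absolute risk increase. A minimal sketch of the calculation, using hypothetical mortality risks rather than figures from the study:

```python
def number_needed_to_harm(risk_treated: float, risk_comparator: float) -> float:
    """NNH = 1 / absolute risk increase (ARI).

    risk_treated: event rate (e.g., mortality) in the treated group.
    risk_comparator: event rate in the comparison group (no treatment,
    or an alternative psychotropic).
    """
    ari = risk_treated - risk_comparator
    if ari <= 0:
        raise ValueError("No excess risk observed; NNH is undefined.")
    return 1.0 / ari

# Hypothetical example: 20% mortality on treatment vs. 15% without.
# ARI = 0.05, so roughly 1 additional death per 20 patients treated.
print(number_needed_to_harm(0.20, 0.15))  # approximately 20
```

A smaller NNH therefore indicates a more harmful treatment: an NNH of 20 means one excess death for every 20 patients treated, whereas an NNH of 100 spreads one excess death across 100 patients.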

(cut)

Conclusions and Relevance

The absolute effect of antipsychotics on mortality in elderly patients with dementia may be higher than previously reported and increases with dose.

The research article is here.

Friday, February 13, 2015

Diagnosis or Delusion?

Patients who say they have Morgellons point to skin lesions as proof of their disease. But doctors believe the lesions are self-inflicted—that the condition is psychological, not dermatological.

By Katherine Foley
The Atlantic
Originally published January 18, 2015

Here is an excerpt:

When patients with these symptoms seek dermatological treatment, they’re usually told that they have delusions of parasitosis, a condition in which people are falsely convinced that they’re infested with parasites—told, in other words, that the crawling, itching sensations under their skin are only in their heads, and the fibers are remnants from clothing. Still, they pick away, trying to get the feeling out. According to Casey, most doctors refuse to even examine the alleged skin fibers and only offer anti-psychotic medication as treatment. It took her three years to find a dermatologist willing to treat her in any other way, and she and her husband had to drive all the way from California to Texas to see him.

The article outlining the conundrum is here.

Wednesday, October 15, 2014

Finding Risks, Not Answers, in Gene Tests

By Denise Grady and Andrew Pollack
The New York Times
Originally published September 22, 2014

Jennifer was 39 and perfectly healthy, but her grandmother had died young from breast cancer, so she decided to be tested for mutations in two genes known to increase risk for the disease.

When a genetic counselor offered additional tests for 20 other genes linked to various cancers, Jennifer said yes. The more information, the better, she thought.

The results, she said, were “surreal.” She did not have mutations in the breast cancer genes, but did have one linked to a high risk of stomach cancer. In people with a family history of the disease, that mutation is considered so risky that patients who are not even sick are often advised to have their stomachs removed. But no one knows what the finding might mean in someone like Jennifer, whose family has not had the disease.

The entire article is here.