Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Ethical Dilemma. Show all posts

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more, as yet unasked, questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Wednesday, February 1, 2023

Ethics Consult: Keep Patient Alive Due to Spiritual Beliefs?

Jacob M. Appel
MedPageToday
Originally posted 28 Jan 23

Welcome to Ethics Consult -- an opportunity to discuss, debate (respectfully), and learn together. We select an ethical dilemma from a true, but anonymized, patient care case. You vote on your decision in the case and, next week, we'll reveal how you all made the call. Bioethicist Jacob M. Appel, MD, JD, will also weigh in with an ethical framework to help you learn and prepare.

The following case is adapted from Appel's 2019 book, Who Says You're Dead? Medical & Ethical Dilemmas for the Curious & Concerned.

Alexander is a 49-year-old man who comes to a prominent teaching hospital for a heart transplant. While awaiting the transplant, he is placed on a machine called a BIVAD, or biventricular assist device -- basically, an artificial heart the size of a small refrigerator to tide him over until a donor heart becomes available. While awaiting a heart, he suffers a severe stroke.

The doctors tell his wife, Katie, that no patient who has suffered such a severe stroke has ever regained consciousness and that Alexander is no longer a candidate for transplant. They would like to turn off the BIVAD and allow nature to take its course.

Not lost on these doctors is that Alexander occupies a desperately needed ICU bed, which could benefit other patients, and that his care costs the healthcare system upwards of $10,000 a day. The doctors are also aware that Alexander could survive for years on the BIVAD and the other machines that are now helping to keep him alive: a ventilator and a dialysis machine.

Katie refuses to yield to the request. "I realize he has no chance of recovery," she says. "But Alexander believed deeply in reincarnation. What mattered most to him was that he die at the right moment -- so that his soul could return to Earth in the body for which it was destined. To him, that would have meant keeping him on the machines until all brain function ceases, even if it means decades. I feel obligated to honor those wishes."

Saturday, February 26, 2022

Experts Are Ringing Alarms About Elon Musk’s Brain Implants

Noah Kirsch
Daily Beast
Posted 25 Jan 2022

Here is an excerpt:

“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.

“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”

In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.

Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.

There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”

The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?

Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”

Neuralink did not respond to a request for comment as of publication time.

Thursday, October 10, 2019

Moral Distress and Moral Strength Among Clinicians in Health Care Systems: A Call for Research

Connie M. Ulrich and Christine Grady
NAM Perspectives. 
https://doi.org/10.31478/201909c


Here is an excerpt:

Evidence shows that dissatisfaction and wanting to leave one’s job—and the profession altogether—often follow morally distressing encounters. Ethics education that builds cognitive and communication skills, teaches clinicians ethical concepts, and helps them gain confidence may be essential in building moral strength. One study found, for example, that among practicing nurses and social workers, those with the least ethics education were also the least confident, the least likely to use ethics resources (if available), and the least likely to act on their ethical concerns. In this national study, as many as 23 percent of nurses reported having had no ethics education at all. But the question remains—is ethics education enough?

Many factors likely support or hinder a clinician’s capacity and willingness to act with moral strength. More research is needed to investigate how interdisciplinary ethics education and institutional resources can help nurses, physicians, and others voice their ethical concerns, help them agree on morally acceptable actions, and support their capacity and propensity to act with moral strength and confidence. Research on moral distress and ethical concerns in everyday clinical practice can begin to build a knowledge base that will inform clinical training—in both educational and health care institutions—and that will help create organizational structures and processes to prepare and support clinicians to encounter potentially distressing situations with moral strength. Research can help tease out what is important and predictive for taking (or not taking) ethical action in morally distressing circumstances. This knowledge would be useful for designing strategies to support clinician well-being. Indeed, studies should focus on the influences that affect clinicians’ ability and willingness to become involved or take ownership of ethically-laden patient care issues, and their level of confidence in doing so.

Thursday, June 13, 2019

Moral dilemmas in (not) treating patients who feel they are a burden

Metselaar S, Widdershoven G.
[published online April 23, 2019]
Bioethics. 2019;33(4):431-438.

Abstract

Working as clinical ethicists in an academic hospital, we find that practitioners tend to take a principle‐based approach to moral dilemmas when it comes to (not) treating patients who feel like a burden, in which respect for autonomy tends to trump other principles. We argue that this approach insufficiently deals with the moral doubts of professionals with regard to feeling that you are a burden as a motive to decline or withdraw from treatment. Neither does it adequately take into account the specific needs of the patient that might underlie their feeling of being a burden to others. We propose a care ethics approach as an alternative. It focuses on being attentive and responsive to the caring needs of those involved in the care process—which can be much more specific than either receiving or withdrawing from treatment. This approach considers these needs in the context of the patient's identity, biography and relationships, and regards autonomy as relational rather than as individual. We illustrate the difference between these two approaches by means of the case of Mrs K. Furthermore, we show that a care ethics approach is in line with interventions that are found to alleviate feeling a burden and maintain that facilitating moral case deliberation among practitioners can support them in taking a care ethics approach to moral dilemmas in (not) treating patients who feel like a burden.

The info is here.

Wednesday, May 15, 2019

Students' Ethical Decision‐Making When Considering Boundary Crossings With Counselor Educators

Stephanie T. Burns
Counseling and Values
First published: 10 April 2019
https://doi.org/10.1002/cvj.12094

Abstract

Counselor education students (N = 224) rated 16 boundary‐crossing scenarios involving counselor educators. They viewed boundary crossings as unethical and were aware of power differentials between the 2 groups. Next, they rated the scenarios again, after reviewing 1 of 4 ethical informational resources: relevant standards in the ACA Code of Ethics (American Counseling Association, 2014), 2 different boundary‐crossing decision‐making models, and a placebo. Although participants rated all resources except the placebo as moderately helpful, these resources had little to no influence on their ethical decision‐making. Only 47% of students in the 2 ethical decision‐making model groups reported they would use the model they were exposed to in the future when contemplating boundary crossings.

Here is a portion from Implications for Practice and Training

Counselor education students took conservative stances toward the 16 boundary-crossing scenarios with counselor educators. These findings support results of previous researchers who stated that students struggle with even the smallest of boundary crossings (Kozlowski et al., 2014) because they understand that power differentials have implications for grades, evaluations, recommendation letters, and obtaining authentic skill development feedback (Gu et al., 2011). Counselor educators need to be aware that students rate withholding appropriate feedback because of the counselor educator’s personal feelings toward the student, failing to provide students with required supervision time in practicum, and taking first authorship when the student performed all the work on the submission as being as abusive as having sex with a student.

The research is here.

Wednesday, May 8, 2019

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are foregoing formulating hypotheses in favor of allowing data to make inferences on particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Thursday, January 31, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

Caroline Lester
The New Yorker
Originally posted January 24, 2019

Here is an excerpt:

The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The info is here.

Wednesday, November 14, 2018

Moral resilience: how to navigate ethical complexity in clinical practice

Cynda Rushton
Oxford University Press
Originally posted October 12, 2018

Clinicians are constantly confronted with ethical questions. Recent examples of healthcare workers caught up in high-profile best-interest cases are on the rise, but decisions regarding the allocation of the clinician’s time and skills, or scarce resources such as organs and medication, are everyday occurrences. The increasing pressure of “doing more with less” is one that can take its toll.

Dr Cynda Rushton is a professor of clinical ethics, and a proponent of ‘moral resilience’ as a pathway through which clinicians can lessen their experience of moral distress, and navigate the contentious issues they may face with a greater sense of integrity. In the video series below, she provides the guiding principles of moral resilience, and explores how they can be put into practice.



The videos are here.

Saturday, September 8, 2018

Silicon Valley Writes a Playbook to Help Avert Ethical Disasters

Arielle Pardes
www.wired.com
Originally posted August 7, 2018

Here is an excerpt:

The first section outlines 14 near-future scenarios, based on contemporary anxieties in the tech world that could threaten companies in the future. What happens, for example, if a company like Facebook purchases a major bank and becomes a social credit provider? What happens if facial-recognition technology becomes a mainstream tool, spawning a new category of apps that integrates the tech into activities like dating and shopping? Teams are encouraged to talk through each scenario, connect them back to the platforms or products they're developing, and discuss strategies to prepare for these possible futures.

Each of these scenarios came from contemporary "signals" identified by the Institute for the Future—the rise of "deep fakes," tools for "predictive justice," and growing concerns about technology addiction.

"We collect things like this that spark our imagination and then we look for patterns, relationships. Then we interview people who are making these technologies, and we start to develop our own theories about where the risks will emerge," says Jane McGonigal, the director of game research at the Institute of the Future and the research lead for the Ethical OS. "The ethical dilemmas are around issues further out than just the next release or next growth cycle, so we felt helping companies develop the imagination and foresight to think a decade out would allow more ethical action today."

The info is here.

Monday, August 20, 2018

Ethics and the pursuit of artificial intelligence

Daniel Wagner
South China Morning Post
Originally posted August 6, 2018

So many businesses and governments are scurrying to get into the artificial intelligence (AI) race that many appear to be losing sight of some important things that should matter along the way – such as legality, good governance, and ethics.

In the AI arena the stakes are extremely high and it is quickly becoming a free-for-all from data acquisition to the stealing of corporate and state secrets. The “rules of the road” are either being addressed along the way or not at all, since the legal regime governing who can do what to whom, and how, is either wholly inadequate or simply does not exist. As is the case in the cyber world, the law is well behind the curve.

Ethical questions abound with AI systems, raising concerns about how machines recognise and process values and ethical paradigms. AI is certainly not unique among emerging technologies in creating ethical quandaries, but ethical questions in AI research and development present unique challenges in that they ask us to consider whether, when, and how machines should make decisions about human lives – and whose values should guide those decisions.

In a world filled with unintended consequences, will our collectively shared values fall by the wayside in an effort to reach AI supremacy? Will the notion of human accountability eventually disappear in an AI-dominated world? Could the commercial AI landscape evolve into a winner-takes-all arena in which only one firm or machine is left standing?

The information is here.

Wednesday, February 28, 2018

Can scientists agree on a code of ethics?

David Ryan Polgar
BigThink.com
Originally published January 30, 2018

Here is an excerpt:

Regarding the motivation for developing this Code of Ethics, Hug mentioned the threat of reduced credibility of research if the standards seem too loose. She mentioned the pressure that many young scientists face to be prolific with research, hinting at the tension between quantity and quality. "We want research to remain credible because we want it to have an impact on policymakers, research being turned into action." One of Hug's goals in presenting the Code of Ethics, she said, was to have various research institutions endorse the document and begin distributing it within their networks.

“All these goals will conflict with each other," said Jodi Halpern, referring to the issues that may get in the way of adopting a code of ethics for scientists. "People need rigorous education in ethical reasoning, which is just as rigorous as science education...what I’d rather have as a requirement, if I’d like to put teeth anywhere. I’d like to have every doctoral student not just have one of those superficial IRB fake compliance courses, but I’d like to have them have to pass a rigorous exam showing how they would deal with certain ethical dilemmas. And everybody who will be the head of a lab someday will have really learned how to do that type of thinking.”

The article is here.

Tuesday, February 13, 2018

How Should Physicians Make Decisions about Mandatory Reporting When a Patient Might Become Violent?

Amy Barnhorst, Garen Wintemute, and Marian Betz
AMA Journal of Ethics. January 2018, Volume 20, Number 1: 29-35.

Abstract

Mandatory reporting of persons believed to be at imminent risk for committing violence or attempting suicide can pose an ethical dilemma for physicians, who might find themselves struggling to balance various conflicting interests. Legal statutes dictate general scenarios that require mandatory reporting to supersede confidentiality requirements, but physicians must use clinical judgment to determine whether and when a particular case meets the requirement. In situations in which it is not clear whether reporting is legally required, the situation should be analyzed for its benefit to the patient and to public safety. Access to firearms can complicate these situations, as firearms are a well-established risk factor for violence and suicide yet also a sensitive topic about which physicians and patients might have strong personal beliefs.

The commentary is here.

Does Volk v. DeMeerleer Conflict with the AMA Code of Medical Ethics?

Jennifer L. Piel and Rejoice Opara
AMA Journal of Ethics. January 2018, Volume 20, Number 1: 10-18.

Abstract

A recent Washington State case revisits the obligation of mental health clinicians to protect third parties from the violent acts of their patients. Although the case of Volk v DeMeerleer raises multiple legal, ethical, and policy issues, this article will focus on a potential ethical conflict between the case law and professional guidelines, namely the American Medical Association’s Code of Medical Ethics.

Here is a portion of the conclusion:

The Volk case established legal precedent for outpatient mental health clinicians in Washington State. Future cases against clinicians for their patients’ harm to third parties (e.g., medical negligence, wrongful death) will be tried under the Volk standard. It will be up to the trier of fact to determine whether the victims of a patient’s violence were foreseeable and, if so, whether the clinician acted reasonably to protect them.

Without changes to this law, there is increased likelihood that future clinicians and employers in similar situations, fearful of being in Dr. Ashby’s position, will more willingly (and likely unhelpfully) breach patient confidentiality. This creates a dilemma for clinicians in Washington State, who could find themselves caught between trying to meet the requirements of the legal case and adhering to their professional ethical guidelines.

The article is here.

Tuesday, December 26, 2017

Should Robots Have Rights? Four Perspectives

John Danaher
Philosophical Disquisitions
Originally published October 31, 2017

Here is an excerpt:

The Four Positions on Robot Rights

Before I get into the four perspectives that Gunkel reviews, I’m going to start by asking a question that he does not raise (in this paper), namely: what would it mean to say that a robot has a ‘right’ to something? This is an inquiry into the nature of rights in the first place. I think it is important to start with this question because it is worth having some sense of the practical meaning of robot rights before we consider their entitlement to them.

I’m not going to say anything particularly ground-breaking. I’m going to follow the standard Hohfeldian account of rights — one that has been used for over 100 years. According to this account, rights claims — e.g. the claim that you have a right to privacy — can be broken down into a set of four possible ‘incidents’: (i) a privilege; (ii) a claim; (iii) a power; and (iv) an immunity. So, in the case of a right to privacy, you could be claiming one or more of the following four things:
  • Privilege: That you have a liberty or privilege to do as you please within a certain zone of privacy.

  • Claim: That others have a duty not to encroach upon you in that zone of privacy.

  • Power: That you have the power to waive your claim-right not to be interfered with in that zone of privacy.

  • Immunity: That you are legally protected against others trying to waive your claim-right on your behalf.
As you can see, these four incidents are logically related to one another. Saying that you have a privilege to do X typically entails that you have a claim-right against others to stop them from interfering with that privilege. That said, you don’t need all four incidents in every case.
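
As a purely illustrative aside (not part of Danaher's post), the Hohfeldian decomposition maps naturally onto a small structured model. The sketch below, in Python with hypothetical names, simply encodes the four incidents and bundles them into a rights claim; it asserts nothing about which incidents robots would actually be entitled to.

```python
# Illustrative toy model of the Hohfeldian "incidents" described above.
# All names are hypothetical; this is not drawn from Danaher's post or Gunkel's paper.

from dataclasses import dataclass, field
from enum import Enum, auto

class Incident(Enum):
    PRIVILEGE = auto()  # liberty to act within the protected zone
    CLAIM = auto()      # others' duty not to encroach on that zone
    POWER = auto()      # ability to waive one's own claim-right
    IMMUNITY = auto()   # protection against others waiving it on your behalf

@dataclass
class RightsClaim:
    holder: str
    subject: str                                   # e.g., "privacy"
    incidents: set[Incident] = field(default_factory=set)

# A right to privacy typically bundles several incidents, though, as the post
# notes, a rights claim need not include all four.
privacy_right = RightsClaim(
    holder="you",
    subject="privacy",
    incidents={Incident.PRIVILEGE, Incident.CLAIM, Incident.POWER, Incident.IMMUNITY},
)
print(sorted(i.name for i in privacy_right.incidents))
```

On this account, asking whether a robot "has a right" becomes a question about which of these incidents, if any, we are prepared to grant it.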

The blog post is here.

Friday, December 8, 2017

Autonomous future could question legal ethics

Becky Raspe
Cleveland Jewish News
Originally published November 21, 2017

Here is an excerpt:

Northman said he finds the ethical implications of an autonomous future interesting, but completely contradictory to what he learned in law school in the 1990s.

“People were expected to be responsible for their activities,” he said. “And as long as it was within their means to stop something or, more tellingly, anticipate a problem before it occurs, they have an obligation to do so. When you blend software over the top of that with this level of autonomy, we are left with some difficult boundaries to try and assess where a driver’s responsibility starts or the software programmer’s continues on.”

When considering the ethics surrounding autonomous living, Paris referenced the “trolley problem.” The trolley problem goes like this: an automated vehicle is operating on an open road; ahead, there are five people in the road and one person off to the side. The question, Paris said, is whether the vehicle should continue on and hit the five people or swerve and hit just the one.

“When humans are driving vehicles, they are the moral decision makers that make those choices behind the wheel,” she said. “Can engineers program automated vehicles to replace that moral thought with an algorithm? Will they prioritize the five lives or that one person? There are a lot of questions and not too many solutions at this point. With these ethical dilemmas, you have to be careful about what is being implemented.”
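
To make concrete what "replacing moral thought with an algorithm" could mean, here is a minimal, purely hypothetical sketch of a naive utilitarian swerve rule in Python. It is not taken from any real vehicle software, and every name in it is an assumption made for illustration; it only shows how starkly such a rule compresses the dilemma Paris describes.

```python
# Purely hypothetical sketch of a naive utilitarian "swerve rule" for the
# trolley-style dilemma described above; no real autonomous-vehicle system
# is represented here.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str          # e.g., "continue straight" or "swerve"
    people_harmed: int  # expected number of people struck

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome that harms the fewest people (a bare body count)."""
    return min(outcomes, key=lambda o: o.people_harmed)

if __name__ == "__main__":
    dilemma = [
        Outcome("continue straight", people_harmed=5),
        Outcome("swerve", people_harmed=1),
    ]
    print(choose_action(dilemma).label)  # prints "swerve"
```

The point is not that engineers do or should decide this way, but that any such rule quietly embeds a moral theory (here, nothing more than a body count), which is exactly the kind of choice the quoted experts argue cannot be left implicit.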

The article is here.

Friday, October 13, 2017

Moral Distress: A Call to Action

The Editor
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 533-536.

During medical school, I was exposed for the first time to ethical considerations that stemmed from my new role in the direct provision of patient care. Ethical obligations were now both personal and professional, and I had to navigate conflicts between my own values and those of patients, their families, and other members of the health care team. However, I felt paralyzed by factors such as my relative lack of medical experience, low position in the hospital hierarchy, and concerns about evaluation. I experienced a profound and new feeling of futility and exhaustion, one that my peers also often described.

I have since realized that this experience was likely “moral distress,” a phenomenon originally described by Andrew Jameton in 1984. For this issue, the following definition, adapted from Jameton, will be used: moral distress occurs when a clinician makes a moral judgment about a case in which he or she is involved and an external constraint makes it difficult or impossible to act on that judgment, resulting in “painful feelings and/or psychological disequilibrium”. Moral distress has subsequently been shown to be associated with burnout, which includes poor coping mechanisms such as moral disengagement, blunting, denial, and interpersonal conflict.

Moral distress as originally conceived by Jameton pertained to nurses and has been extensively studied in the nursing literature. However, until a few years ago, the literature was silent on the moral distress of medical students and physicians.

The article is here.

Tuesday, July 25, 2017

Should a rapist get Viagra or a robber get a cataracts op?

Tom Douglas
Aeon Magazine
Originally published on July 7, 2017

Suppose a physician is about to treat a patient for diminished sex drive when she discovers that the patient – let’s call him Abe – has raped several women in the past. Fearing that boosting his sex drive might lead Abe to commit further sex offences, she declines to offer the treatment. Refusal to provide medical treatment in this case strikes many as reasonable. It might not be entirely unproblematic, since some will argue that he has a human right to medical treatment, but many of us would probably think the physician is within her rights – she’s not obliged to treat Abe. At least, not if her fears about further offending are well-founded.

But now consider a different case. Suppose an eye surgeon is about to book Bert in for a cataract operation when she discovers that he is a serial bank robber. Fearing that treating his developing blindness might help Bert to carry off further heists, she declines to offer the operation. In many ways, this case mirrors that of Abe. But morally, it seems different. In this case, refusing treatment does not seem reasonable, no matter how well-founded the surgeon’s fear. What’s puzzling is why. Why is Bert’s surgeon obliged to treat his blindness, while Abe’s physician has no similar obligation to boost his libido?

Here’s an initial suggestion: diminished libido, it might be said, is not a ‘real disease’. An inconvenience, certainly. A disability, perhaps. But a genuine pathology? No. By contrast, cataract disease clearly is a true pathology. So – the argument might go – Bert has a stronger claim to treatment than Abe. But even if reduced libido is not itself a disease – a view that could be contested – it could have pathological origins. Suppose Abe has a disease that suppresses testosterone production, and thus libido. And suppose that the physician’s treatment would restore his libido by correcting this disease. Still, it would seem reasonable for her to refuse the treatment, if she had good grounds to believe providing it could result in further sex offences.

Monday, June 26, 2017

Antecedents and Consequences of Medical Students’ Moral Decision Making during Professionalism Dilemmas

Lynn Monrouxe, Malissa Shaw, and Charlotte Rees
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 568-577.

Abstract

Medical students often experience professionalism dilemmas (which differ from ethical dilemmas) wherein students sometimes witness and/or participate in patient safety, dignity, and consent lapses. When faced with such dilemmas, students make moral decisions. If students’ action (or inaction) runs counter to their perceived moral values—often due to organizational constraints or power hierarchies—they can suffer moral distress, burnout, or a desire to leave the profession. If moral transgressions are rationalized as being for the greater good, moral distress can decrease as dilemmas are experienced more frequently (habituation); if no learner benefit is seen, distress can increase with greater exposure to dilemmas (disturbance). We suggest how medical educators can support students’ understandings of ethical dilemmas and facilitate their habits of enacting professionalism: by modeling appropriate resistance behaviors.

The article is here.

Friday, June 9, 2017

Are practitioners becoming more ethical?

By Rebecca Clay
The Monitor on Psychology
May 2017, Vol 48, No. 5
Print version: page 50

The results of research presented at APA's 2016 Annual Convention suggest that today's practitioners are less likely to commit such ethical violations as kissing a client, altering diagnoses to meet insurance criteria and treating homosexuality as pathological than their counterparts 30 years ago.

The research, conducted by psychologists Rebecca Schwartz-Mette, PhD, of the University of Maine at Orono and David S. Shen-Miller, PhD, of Bastyr University, replicated a 1987 study by Kenneth Pope, PhD, and colleagues published in the American Psychologist. Schwartz-Mette and Shen-Miller asked 453 practicing psychologists the same 83 questions posed to practitioners three decades ago.

The items included clear ethical violations, such as having sex with a client or supervisee. But they also included behaviors that could reasonably be construed as ethical, such as breaking confidentiality to report child abuse; behaviors that are ambiguous or not specifically prohibited, such as lending money to a client; and even some that don't seem controversial, such as shaking hands with a client. "Interestingly, 75 percent of the items from the Pope study were rated as less ethical in our study, suggesting a more general trend toward conservatism in multiple areas," says Schwartz-Mette.

The article is here.