Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, November 19, 2023

AI Will—and Should—Change Medical School, Says Harvard’s Dean for Medical Education

Hswen Y, Abbasi J.
JAMA. Published online October 25, 2023.

Here is an excerpt:

Dr Bibbins-Domingo: When these types of generative AI tools first came into prominence or awareness, educators, whatever level of education they were involved with, had to scramble because their students were using them. They were figuring out how to put up the right types of guardrails, set the right types of rules. Are there rules or danger zones right now that you’re thinking about?

Dr Chang: Absolutely, and I think there’s quite a number of these. This is a focus that we’re embarking on right now because as exciting as the future is and as much potential as these generative AI tools have, there are also dangers and there are also concerns that we have to address.

One of them is helping our students, who like all of us are still new to this within the past year, understand the limitations of these tools. Now these tools are going to get better year after year after year, but right now they are still prone to hallucinations, or basically making up facts that aren’t really true and yet saying them with confidence. Our students need to recognize why it is that these tools might come up with those hallucinations to try to learn how to recognize them and to basically be on guard for the fact that just because ChatGPT is giving you a very confident answer, it doesn’t mean it’s the right answer. And in medicine of course, that’s very, very important. And so that’s one—just the accuracy and the validity of the content that comes out.

As I wrote about in my Viewpoint, the way that these tools work is basically a very fancy form of autocomplete, right? It is essentially using a probabilistic prediction of what the next word is going to be. And so there’s no separate validity or confirmation of the factual material, and that’s something that we need to make sure that our students understand.
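The "fancy autocomplete" idea can be sketched in a few lines of code. The tiny bigram table and word choices below are invented purely for illustration; real LLMs learn distributions over tens of thousands of tokens with neural networks, but the core step is the same: sample the next word from a probability distribution, with no separate check on factual accuracy.

```python
import random

# Hypothetical bigram probabilities, for illustration only:
# for each word, a distribution over plausible next words.
NEXT_WORD_PROBS = {
    "the": {"patient": 0.5, "diagnosis": 0.3, "treatment": 0.2},
    "patient": {"presented": 0.6, "reported": 0.4},
}

def predict_next(word, rng=random):
    """Sample the next word from the distribution learned for `word`."""
    dist = NEXT_WORD_PROBS.get(word)
    if dist is None:
        return None
    words = list(dist)
    weights = [dist[w] for w in words]
    # Nothing here verifies that the chosen word is factually correct;
    # the model only tracks which continuations are statistically likely.
    return rng.choices(words, weights=weights, k=1)[0]

print(predict_next("the"))  # e.g. "patient" -- likely, not verified
```

The sketch makes Dr Chang's point concrete: a confident-sounding continuation is just the statistically likeliest one, which is why hallucinations can be stated with full fluency.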

The other thing is to address the fact that these tools may inherently be structurally biased. Now, why would that be? Well, as we know, ChatGPT and these other large language models [LLMs] are trained on the world’s internet, so to speak, right? They’re trained on the noncopyrighted corpus of material that’s out there on the web. And to the extent that that corpus of material was generated by human beings who in their postings and their writings exhibit bias in one way or the other, whether intentionally or not, that’s the corpus on which these LLMs are trained. So it only makes sense that when we use these tools, these tools are going to potentially exhibit evidence of bias. And so we need our students to be very aware of that. As we have worked to reduce the effects of systematic bias in our curriculum and in our clinical sphere, we need to recognize that as we introduce this new tool, this will be another potential source of bias.


Here is my summary:

Bernard Chang, the Dean for Medical Education at Harvard Medical School, argues that artificial intelligence (AI) is poised to transform medical education. He contends that AI has the potential to improve how medical students learn and train, and that medical schools should not only embrace AI but also take an active role in shaping its development and use.

Chang identifies several areas where AI could have a significant impact on medical education. First, AI could be used to personalize learning and provide students with more targeted feedback. For example, AI-powered tutors could help students learn complex medical concepts at their own pace, and AI-powered diagnostic tools could help students practice their clinical skills.

Second, AI could be used to automate tasks that are currently performed by human instructors, such as grading exams and providing feedback on student assignments. This would free up instructors to focus on more high-value activities, such as mentoring students and leading discussions.

Third, AI could be used to create new educational experiences that are not possible with traditional methods. For example, AI could be used to create virtual patients that students can interact with to practice their clinical skills. AI could also be used to develop simulations of complex medical procedures that students can practice in a safe environment.

Chang argues that medical schools have a responsibility to prepare students for the future of medicine, which will be increasingly reliant on AI. He writes that medical schools should teach students how to use AI effectively, and how to critically evaluate AI-generated information. Medical schools should also develop new curricula that take into account the potential impact of AI on medical practice.

Wednesday, January 15, 2020

How should we balance morality and the law?

Peter Koch
BCM Blogs
Originally posted 20 Dec 19

I was recently discussing a clinical case with medical students and physicians that involved balancing murky ethical issues and relevant laws. One participant leaned back and said: “Well, if we know the laws, then that’s the end of the story!”

The laws were clear about what ought to (legally) be done, but following the laws in this case would likely produce a bad outcome. We ended up divided about how to proceed with the case, but this discussion raised a bigger question: Exactly how much should we weigh the law in moral deliberations?

The basic distinction between the legal and moral is easy enough to identify. Most people agree that what is legal is not necessarily moral and what is immoral should not necessarily be illegal.

Slavery in the U.S. is commonly used as an example. “Of course,” a good modern citizen will say, “slavery was wrong even when it was legal.” The passing of the 13th Amendment did not make slavery morally wrong; it was wrong already, and the legal structures finally caught up to the moral structures.

There are plenty of acts that are immoral but that should not be illegal. For example, perhaps it is immoral to gossip about your friend’s personal life, but most would agree that this sort of gossip should not be outlawed. The basic distinction between the legal and the moral appears to be simple enough.

Things get trickier, though, when we press more deeply into the matter.

The blog post is here.

Saturday, June 2, 2018

Preventing Med School Suicides

Roger Sergel
MedPage Today
Originally posted May 2, 2018

Here is an excerpt:

The medical education community needs to acknowledge the stress imposed on our medical learners as they progress from students to faculty. One of the biggest obstacles is changing the culture of medicine, not only to understand the key burnout drivers and pain points but to invest resources into developing strategies that reduce stress. These strategies must include the medical learner taking ownership of the role they play in their own lack of well-being. In addition, medical schools and healthcare organizations must reflect on their policies and processes that do not promote wellness. In both cases, each group points to the other as the one that needs to change. Both are right.

We do need to change how we deliver a quality medical education AND we need our medical learners to reflect on their personal attitudes and openness to developing their resilience muscles to manage their stress. Equally important, we need to reduce the stigma of seeking help and break down the barriers which would allow our medical learners and physicians to seek help, when needed. We need to create support services which are convenient, accessible, and utilized.

What programs does your school have to support medical students' mental health?

The information is here.

Thursday, May 4, 2017

Rude Doctors, Rude Nurses, Rude Patients

Perri Klass
The New York Times
Originally published April 10, 2017

Here is an excerpt:

None of that is a surprise, and in fact, there is a good deal of literature to suggest that the medical environment includes all kinds of harshness, and that much of the rudeness you encounter as a doctor or nurse is likely to come from colleagues and co-workers.  An often-cited British study from 2015 called “Sticks and Stones” reported that rude, dismissive and aggressive communication between doctors (inevitably abbreviated, in a medical journal, as RDA communication) affected 31 percent of doctors several times a week or more. The researchers found that rudeness was more common from certain medical specialties: radiology, general surgery, neurosurgery and cardiology. They also established that higher status was somewhat protective; junior doctors and trainees encountered more rudeness.

In the United States, a number of studies have looked at how rudeness affects medical students and medical residents, as part of tracking the different ways in which they are often mistreated.

One article earlier this year in the journal Medical Teacher charted the effect on medical student morale of a variety of experiences, including verbal and nonverbal mistreatment, by everyone from attending physicians to residents to nurses. Mistreatment of medical students, the authors argued, may actually reflect serious problems on the part of their teachers, such as burnout, depression or substance abuse; it’s not enough to classify the “perpetrators” (that is, the rude people) as unprofessional and tell them to stop.

The article is here.

Thursday, April 4, 2013

Fewer Hours for Doctors-in-Training Leading To More Mistakes

By Alexandra Sifferlin
Time
Originally published March 26, 2013

Giving residents less time on duty and more time to sleep was supposed to lead to fewer medical errors. But the latest research shows that’s not the case. What’s going on?

Since 2011, new regulations restricting the number of continuous hours first-year residents spend on-call cut the time that trainees spend at the hospital during a typical duty session from 24 hours to 16 hours. Excessively long shifts, studies showed, were leading to fatigue and stress that hampered not just the learning process, but the care these doctors provided to patients.

And there were tragic examples of the high cost of this exhausting schedule. In 1984, 18-year-old Libby Zion, who was admitted to a New York City hospital with a fever and convulsions, was treated by residents who ordered opiates and restraints when she became agitated and uncooperative. Busy overseeing other patients, the residents didn’t evaluate Zion again until hours later, by which time her fever had soared to 107 degrees; she went into cardiac arrest and died. The case highlighted the enormous pressures on doctors-in-training and the need for reform in the way residents were taught. In 1987, a New York state commission limited the number of hours that doctors could train in the hospital to 80 each week, down from the 100-hour weeks with 36-hour “call” shifts that were the norm at the time. In 2003, the Accreditation Council for Graduate Medical Education followed suit with rules for all programs mandating that trainees work no more than 24 consecutive hours.

The entire article is here.