Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Feedback.

Sunday, November 19, 2023

AI Will—and Should—Change Medical School, Says Harvard’s Dean for Medical Education

Hswen Y, Abbasi J.
JAMA. Published online October 25, 2023.

Here is an excerpt:

Dr Bibbins-Domingo: When these types of generative AI tools first came into prominence or awareness, educators, whatever level of education they were involved with, had to scramble because their students were using them. They were figuring out how to put up the right types of guardrails, set the right types of rules. Are there rules or danger zones right now that you’re thinking about?

Dr Chang: Absolutely, and I think there’s quite a number of these. This is a focus that we’re embarking on right now because as exciting as the future is and as much potential as these generative AI tools have, there are also dangers and there are also concerns that we have to address.

One of them is helping our students, who like all of us are still new to this within the past year, understand the limitations of these tools. Now these tools are going to get better year after year after year, but right now they are still prone to hallucinations, or basically making up facts that aren’t really true and yet saying them with confidence. Our students need to recognize why it is that these tools might come up with those hallucinations to try to learn how to recognize them and to basically be on guard for the fact that just because ChatGPT is giving you a very confident answer, it doesn’t mean it’s the right answer. And in medicine of course, that’s very, very important. And so that’s one—just the accuracy and the validity of the content that comes out.

As I wrote about in my Viewpoint, the way that these tools work is basically a very fancy form of autocomplete, right? It is essentially using a probabilistic prediction of what the next word is going to be. And so there’s no separate validity or confirmation of the factual material, and that’s something that we need to make sure that our students understand.
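Dr Chang's "fancy autocomplete" description can be made concrete with a toy model. The sketch below is purely illustrative (real LLMs use neural networks over tokens, not word bigrams): it picks the next word from observed frequencies alone, with no separate check on factual validity — exactly the limitation he describes.

```python
# Minimal sketch of next-word prediction as "autocomplete":
# a bigram model returns the statistically most likely follower,
# regardless of whether the completion is factually correct.
from collections import Counter, defaultdict

corpus = (
    "the heart pumps blood the heart beats fast "
    "the brain controls the heart rate"
).split()

# Count word -> next-word frequencies.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "heart" (most frequent follower)
```

The model answers with full "confidence" even when its training data is thin or biased — a miniature version of why a fluent ChatGPT answer is not evidence of a correct one.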

The other thing is to address the fact that these tools may inherently be structurally biased. Now, why would that be? Well, as we know, ChatGPT and these other large language models [LLMs] are trained on the world’s internet, so to speak, right? They’re trained on the noncopyrighted corpus of material that’s out there on the web. And to the extent that that corpus of material was generated by human beings who in their postings and their writings exhibit bias in one way or the other, whether intentionally or not, that’s the corpus on which these LLMs are trained. So it only makes sense that when we use these tools, these tools are going to potentially exhibit evidence of bias. And so we need our students to be very aware of that. As we have worked to reduce the effects of systematic bias in our curriculum and in our clinical sphere, we need to recognize that as we introduce this new tool, this will be another potential source of bias.


Here is my summary:

Bernard Chang, the Dean for Medical Education at Harvard Medical School, argues that artificial intelligence (AI) is poised to transform medical education. In his view, AI has the potential to improve the way medical students learn and train, and medical schools should not only embrace AI but also take an active role in shaping its development and use.

Chang identifies several areas where AI could have a significant impact on medical education. First, AI could be used to personalize learning and provide students with more targeted feedback. For example, AI-powered tutors could help students learn complex medical concepts at their own pace, and AI-powered diagnostic tools could help students practice their clinical skills.

Second, AI could be used to automate tasks that are currently performed by human instructors, such as grading exams and providing feedback on student assignments. This would free up instructors to focus on more high-value activities, such as mentoring students and leading discussions.

Third, AI could be used to create new educational experiences that are not possible with traditional methods. For example, AI could be used to create virtual patients that students can interact with to practice their clinical skills. AI could also be used to develop simulations of complex medical procedures that students can practice in a safe environment.

Chang argues that medical schools have a responsibility to prepare students for the future of medicine, which will be increasingly reliant on AI. He writes that medical schools should teach students how to use AI effectively, and how to critically evaluate AI-generated information. Medical schools should also develop new curricula that take into account the potential impact of AI on medical practice.

Saturday, March 13, 2021

The Dynamics of Motivated Beliefs

Zimmermann, Florian. 2020.
American Economic Review, 110 (2): 337-61.

Abstract
A key question in the literature on motivated reasoning and self-deception is how motivated beliefs are sustained in the presence of feedback. In this paper, we explore dynamic motivated belief patterns after feedback. We establish that positive feedback has a persistent effect on beliefs. Negative feedback, instead, influences beliefs in the short run, but this effect fades over time. We investigate the mechanisms of this dynamic pattern, and provide evidence for an asymmetry in the recall of feedback. Finally, we establish that, in line with theoretical accounts, incentives for belief accuracy mitigate the role of motivated reasoning.

From the Discussion

In light of the finding that negative feedback has only limited effects on beliefs in the long run, the question arises as to whether people should become entirely delusional about themselves over time. Note that results from the incentive treatments highlight that incentives for recall accuracy bound the degree of self-deception and thereby possibly prevent motivated agents from becoming entirely delusional. Further note that there exists another rather mechanical counterforce, which is that the perception of feedback likely changes as people become more confident. In terms of the experiment, if a subject believes that the chances of ranking in the upper half are mediocre, then that subject will likely perceive two comparisons out of three as positive feedback. If, instead, the same subject is almost certain they rank in the upper half, then that subject will likely perceive the same feedback as rather negative. Note that this “perception effect” is reflected in the Bayesian definition of feedback that we report as a robustness check in the Appendix of the paper. An immediate consequence of this change in perception is that the more confident an agent becomes, the more likely it is that they will obtain negative feedback. Unless an agent does not incorporate negative feedback at all, this should act as a force that bounds people’s delusions.
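The "perception effect" described in the discussion can be sketched in a few lines. The threshold rule below is an illustrative assumption, not the paper's formal Bayesian definition: a subject labels feedback positive or negative relative to the win rate their own confidence would lead them to expect.

```python
# Toy sketch of the "perception effect": the same objective outcome
# (winning 2 of 3 pairwise comparisons) feels positive to a subject
# with mediocre confidence but negative to a near-certain one.

def perceived_feedback(confidence, wins, comparisons=3):
    """Label feedback relative to the win rate the subject's
    confidence in ranking in the upper half would predict."""
    expected_wins = confidence * comparisons
    return "positive" if wins > expected_wins else "negative"

print(perceived_feedback(0.5, wins=2))  # -> "positive" (2 > 1.5 expected)
print(perceived_feedback(0.9, wins=2))  # -> "negative" (2 < 2.7 expected)
```

This is the mechanical counterforce the authors describe: the more confident the agent becomes, the more often identical feedback registers as negative, which bounds runaway self-deception.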

Friday, November 13, 2020

Cracking the Code of Sustained Collaboration

Francesca Gino
Harvard Business Review
Originally published Nov 2019

Ask any leader whether his or her organization values collaboration, and you’ll get a resounding yes. Ask whether the firm’s strategies to increase collaboration have been successful, and you’ll probably receive a different answer.

“No change seems to stick or to produce what we expected,” an executive at a large pharmaceutical company recently told me. Most of the dozens of leaders I’ve interviewed on the subject report similar feelings of frustration: So much hope and effort, so little to show for it.

One problem is that leaders think about collaboration too narrowly: as a value to cultivate but not a skill to teach. Businesses have tried increasing it through various methods, from open offices to naming it an official corporate goal. While many of these approaches yield progress—mainly by creating opportunities for collaboration or demonstrating institutional support for it—they all try to influence employees through superficial or heavy-handed means, and research has shown that none of them reliably delivers truly robust collaboration.

What’s needed is a psychological approach. When I analyzed sustained collaborations in a wide range of industries, I found that they were marked by common mental attitudes: widespread respect for colleagues’ contributions, openness to experimenting with others’ ideas, and sensitivity to how one’s actions may affect both colleagues’ work and the mission’s outcome. Yet these attitudes are rare. Instead, most people display the opposite mentality, distrusting others and obsessing about their own status. The task for leaders is to encourage an outward focus in everyone, challenging the tendency we all have to fixate on ourselves—what we’d like to say and achieve—instead of what we can learn from others.

Tuesday, September 22, 2020

How to be an ethical scientist

W. A. Cunningham, J. J. Van Bavel,
& L. H. Somerville
Science Magazine
Originally posted 5 August 2020

True discovery takes time, has many stops and starts, and is rarely neat and tidy. For example, news that the Higgs boson was finally observed in 2012 came 48 years after its original proposal by Peter Higgs. The slow pace of science helps ensure that research is done correctly, but it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings. Even Higgs recognized this problem with the modern academic system: “Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

It’s easy to forget about the “long view” when there is constant pressure to produce. So, in this column, we’re going to focus on the type of long-term thinking that advances science. For example, are you going to cut corners to get ahead, or take a slow, methodical approach? What will you do if your experiment doesn’t turn out as expected? Without reflecting on these deeper issues, we can get sucked into the daily goals necessary for success while failing to see the long-term implications of our actions.

Thinking carefully about these issues will not only impact your own career outcomes, but it can also impact others. Your own decisions and actions affect those around you, including your labmates, your collaborators, and your academic advisers. Our goal is to help you avoid pitfalls and find an approach that will allow you to succeed without impairing the broader goals of science.

Be open to being wrong

Science often advances through accidental (but replicable) findings. The logic is simple: If studies always came out exactly as you anticipated, then nothing new would ever be learned. Our previous theories of the world would be just as good as they ever were. This is why scientific discovery is often most profound when you stumble on something entirely new. Isaac Asimov put it best when he said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny ... .’”


Thursday, April 30, 2020

Difficult Conversations: Navigating the Tension between Honesty and Benevolence

E. Levine, A. Roberts, & T. Cohen
PsyArXiv
Originally published 18 July 2019

Abstract

Difficult conversations are a necessary part of everyday life. To help children, employees, and partners learn and improve, parents, managers, and significant others are frequently tasked with the unpleasant job of delivering negative news and critical feedback. Despite the long-term benefits of these conversations, communicators approach them with trepidation, in part, because they perceive them as involving intractable moral conflict between being honest and being kind. In this article, we review recent research on egocentrism, ethics, and communication to explain why communicators overestimate the degree to which honesty and benevolence conflict during difficult conversations, document the conversational missteps people make as a result of this erred perception, and propose more effective conversational strategies that honor the long-term compatibility of honesty and benevolence. This review sheds light on the psychology of moral tradeoffs in conversation, and provides practical advice on how to deliver unpleasant information in ways that improve recipients’ welfare.

From the Summary:

Difficult conversations that require the delivery of negative information from communicators to targets involve perceived moral conflict between honesty and benevolence. We suggest that communicators exaggerate this conflict. By focusing on the short-term harm and unpleasantness associated with difficult conversations, communicators fail to realize that honesty and benevolence are actually compatible in many cases. Providing honest feedback can help a target to learn and grow, thereby improving the target’s overall welfare. Rather than attempting to resolve the honesty-benevolence dilemma via communication strategies that focus narrowly on the short-term conflict between honesty and emotional harm, we recommend that communicators instead invoke communication strategies that integrate and maximize both honesty and benevolence to ensure that difficult conversations lead to long-term welfare improvements for targets. Future research should explore the traits, mindsets, and contexts that might facilitate this approach. For example, creative people may be more adept at integrative solutions to the perceived honesty-benevolence conflict, and people who are less myopic and more cognizant of the future consequences of their choices may be better at recognizing the long-term benefits of honesty.


This research has relevance to psychotherapy.

Monday, August 27, 2018

Unwanted Events and Side Effects in Cognitive Behavior Therapy

Schermuly-Haupt, ML., Linden, M. & Rush, A.J.
Cognitive Therapy and Research
June 2018, Volume 42, Issue 3, pp 219–229

Abstract

Side effects (SEs) are negative reactions to an appropriately delivered treatment, which must be discriminated from unwanted events (UEs) or consequences of inadequate treatment. One hundred CBT therapists were interviewed for UEs and SEs in one of their current outpatients. Therapists reported 372 UEs in 98 patients and SEs in 43 patients. Most frequent were "negative wellbeing/distress" (27% of patients), "worsening of symptoms" (9%), "strains in family relations" (6%); 21% of patients suffered from severe or very severe and 5% from persistent SEs. SEs are unavoidable and frequent also in well-delivered CBT. They include both symptoms and the impairment of social life. Knowledge about the side effect profile can improve early recognition of SEs, safeguard patients, and enhance therapy outcome.


Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review process depends to a large extent on the trust and willing participation of the scholarly community and requires that everyone involved behaves responsibly and ethically. Peer reviewers play a central and critical part in the peer review process, but may come to the role without any guidance and be unaware of their ethical obligations. Journals have an obligation to provide transparent policies for peer review, and reviewers have an obligation to conduct reviews in an ethical and accountable manner. Clear communication between the journal and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard cases from its members related to peer review issues and bases these guidelines, in part, on the collective experience and wisdom of the COPE Forum participants. It is hoped they will provide helpful guidance to researchers, be a reference for editors and publishers in guiding their reviewers, and act as an educational resource for institutions in training their students and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript submissions to journals, but can also include reviews for other platforms and apply to public commenting that can occur pre- or post-publication. Reviews of other materials such as preprints, grants, books, conference proceeding submissions, registered reports (preregistered protocols), or data will have a similar underlying ethical framework, but the process will vary depending on the source material and the type of review requested. The model of peer review will also influence elements of the process.


Sunday, June 21, 2015

How the brain makes decisions

Science Simplified
Originally published on May 25, 2015

Here are two excerpts:

The results of the study drew three major conclusions. First, that human decision-making can perform just as well as current sophisticated computer models under non-Markovian conditions, such as the presence of a switch-state. This is a significant finding in our current efforts to model the human brain and develop artificial intelligence systems.

Second, that delayed feedback significantly impairs human decision-making and learning, even though it does not impact the performance of computer models, which have perfect memory. In the second experiment, it took human participants ten times more attempts to correctly recall and assign arrows to icons. Feedback is a crucial element of decision-making and learning. We set a goal, make a decision about how to achieve it, act accordingly, and then find out whether or not our goal was met. In some cases, e.g. learning to ride a bike, feedback on every decision we make for balancing, pedaling, braking etc. is instant: either we stay up and going, or we fall down. But in many other cases, such as playing backgammon, feedback is significantly delayed; it can take a while to find out if each move has led us to victory or not.


Source Material:

Clarke AM, Friedrich J, Tartaglia EM, Marchesotti S, Senn W, Herzog MH. Human and Machine Learning in Non-Markovian Decision Making. PLOS ONE, 21 April 2015.