Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Checklists.

Wednesday, November 6, 2019

How to operationalize AI ethics

Khari Johnson
venturebeat.com
Originally published October 7, 2019

Here is an excerpt:

Tools, frameworks, and novel approaches

One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.

“The key one is ethical risk sweeps, periodically scheduling times to really go through what could go wrong and what are the ethical risks. Because I think a big part of ethics is thinking through what can go wrong before it does and having processes in place around what happens when there are mistakes or errors,” she said.

To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative as well as FairTest, a tool for “discovering unwarranted association within data-driven applications” from academic institutions like EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to ensure better user privacy.
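
Federated learning keeps raw data on each user's device and shares only model updates with a central server. A minimal sketch of the federated averaging step, written in plain NumPy; the clients, data, and model here are invented for illustration and are not from the article:

```python
# Minimal sketch of federated averaging (FedAvg): each client trains
# locally on its own private data, and only model weights leave the
# "device". All names and data here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three clients, each holding data that never leaves the device.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
global_weights = np.zeros(3)

for round_ in range(10):
    # Each client refines the current global model on its local data.
    local_weights = [local_step(global_weights.copy(), X, y) for X, y in clients]
    # The server averages the returned weights, weighted by sample count.
    sizes = np.array([len(y) for _, y in clients])
    global_weights = np.average(local_weights, axis=0, weights=sizes)

print("aggregated weights:", global_weights)
```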

In addition to resources recommended by panelists, Algorithm Watch maintains a running list of AI ethics guidelines. Last week, the group found that guidelines released in March 2018 by IEEE, the world’s largest association for professional engineers, have seen little adoption at Facebook and Google.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Monday, September 10, 2018

The Cognitive Biases Tricking Your Brain

Ben Yagoda
The Atlantic
September 2018 Issue

Here is an excerpt:

Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves. Instead, it has been devoted to changing behavior, in the form of incentives or “nudges.” For example, while present bias has so far proved intractable, employers have been able to nudge employees into contributing to retirement plans by making saving the default option; you have to actively take steps in order to not participate. That is, laziness or inertia can be more powerful than bias. Procedures can also be organized in a way that dissuades or prevents people from acting on biased thoughts. A well-known example: the checklists for doctors and nurses put forward by Atul Gawande in his book The Checklist Manifesto.

Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative. These experiments are based on the reactions and responses of randomly chosen subjects, many of them college undergraduates: people, that is, who care about the $20 they are being paid to participate, not about modifying or even learning about their behavior and thinking. But what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?

The info is here.

Wednesday, May 2, 2018

Institutional Research Misconduct Reports Need More Credibility

Gunsalus CK, Marcus AR, Oransky I.
JAMA. 2018;319(13):1315–1316.
doi:10.1001/jama.2018.0358

Institutions have a central role in protecting the integrity of research. They employ researchers, own the facilities where the work is conducted, receive grant funding, and teach many students about the research process. When questions arise about research misconduct associated with published articles, scientists and journal editors usually first ask the researchers’ institution to investigate the allegations and then report the outcomes, under defined circumstances, to federal oversight agencies and other entities, including journals.

Depending on institutions to investigate their own faculty presents significant challenges. Misconduct reports, the mandated product of institutional investigations for which US federal dollars have been spent, have a wide range of problems. These include lack of standardization, inherent conflicts of interest that must be addressed to directly ensure credibility, little quality control or peer review, and limited oversight. Even when institutions act, the information they release to the public is often limited and unhelpful.

As a result, like most elements of research misconduct, little is known about institutions’ responses to potential misconduct by their own members. The community that relies on the integrity of university research does not have access to information about how often such claims arise, or how they are resolved. Nonetheless, there are some indications that many internal reviews are deficient.

The article is here.

Thursday, November 1, 2012

The Use of Checklists in Research


By Kaitlin Gallagher
Inside HigherEd
Originally published October 21, 2012

We may not like to admit it, but many of us can describe a time when we’ve made a mistake during the progress of a study. These mistakes can range from mixing up wires or forgetting to turn on an amplifier to forgetting to collect an essential piece of information that either requires additional processing time or prevents you from analyzing a certain variable altogether. Increased computing power and technological advancements have also made it easier than ever to collect data. We can collect five measures simultaneously in one study and hundreds of trials in no time at all. But where does this leave us now? We must set up all of this equipment and make sure it works together, monitor it as well as our participant or specimen, and somehow sift through all the data post hoc. Even with a detailed lab notebook, it’s no wonder problems can arise. Even just writing this makes me feel…exposed, as if I’m the only one who struggles with this. It seems so simple; how can I not get it perfect every time? I always thought that I just had to work harder to not miss small steps, but maybe I just needed a different, yet structured, perspective on how to manage such a high volume of complex information.

My interest in general checklists above and beyond the detailed lab notebook began after reading The Checklist Manifesto by Atul Gawande, a surgeon and Harvard Professor (he also is the author of a New Yorker column on the same subject). The purpose of this book is to describe how a basic checklist can help us perform complex tasks consistently, correctly, and safely. Much of the book is told from the point of view of eliminating errors during surgery, but Gawande also draws on stories on how checklists have benefited those in construction, aviation, and investing.

The entire story is here.

Monday, September 24, 2012

Simple tool may help evaluate risk for violence among patients with mental illness

News Release
University of California at San Francisco

Here are some excerpts:

Mental health professionals, who often are tasked with evaluating and managing the risk of violence by their patients, may benefit from a simple tool to more accurately make a risk assessment, according to a recent study conducted at the University of California, San Francisco.

The research, led by psychiatrist Alan Teo, MD, when he was a UCSF medical resident, examined how accurate psychiatrists were at evaluating risk of violence by acutely ill patients admitted to psychiatric units.

(cut)

The first part of the study showed that inexperienced psychiatric residents performed no better than they would have by chance, whereas veteran psychiatrists were moderately successful in evaluating their patients' risk of violence.

However, the second part of the study showed that when researchers applied the information from the "Historical, Clinical, Risk Management–20" clinical subscale (HCR-20-C) - a brief, structured risk assessment tool - to the patients evaluated by residents, accuracy in identifying their potential for violence increased to a level nearly as high as the faculty psychiatrists', who had an average of 15 years more experience.

"Similar to a checklist a pilot might use before takeoff, the HRC-20-C has just five items that any trained mental health professional can use to assess their patients," Teo said.

"To improve the safety for staff and patients in high-risk settings, it is critical to teach budding psychiatrists and other mental health professionals how to use a practical tool such as this one."

The entire study is here.

Friday, September 14, 2012

The Relationship Between Level of Training and Accuracy of Violence Risk Assessment

by A. R. Teo, S. R. Holley, M. Leary, and D. E. McNiel
Psychiatric Services 2012; doi: 10.1176/appi.ps.201200019

Objective  Although clinical training programs aspire to develop competency in violence risk assessment, little research has examined whether level of training is associated with the accuracy of clinicians’ evaluations of violence potential. This is the first study to compare the accuracy of risk assessments by experienced psychiatrists with those performed by psychiatric residents. It also examined the potential of a structured decision support tool to improve residents’ risk assessments.

Methods  The study used a retrospective case-control design. Medical records were reviewed for 151 patients who assaulted staff at a county hospital and 150 comparison patients. At admission, violence risk assessments had been completed by psychiatric residents (N=38) for 52 patients and by attending psychiatrists (N=41) for 249 patients. Trained research clinicians, who were blind to whether patients later became violent, coded information available at hospital admission by using a structured risk assessment tool—the Historical, Clinical, Risk Management–20 clinical subscale (HCR-20-C).

Results  Receiver operating characteristic analyses showed that clinical estimates of violence risk by attending psychiatrists had significantly higher predictive validity than those of psychiatric residents. Risk assessments by attending psychiatrists were moderately accurate (area under the curve [AUC]=.70), whereas assessments by residents were no better than chance (AUC=.52). Incremental validity analyses showed that addition of information from the HCR-20-C had the potential to improve the accuracy of risk assessments by residents to a level (AUC=.67) close to that of attending psychiatrists.
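
The AUC summarizes how well risk ratings rank patients who later became violent above those who did not (.50 is chance; 1.0 is perfect discrimination). As an illustration of the computation only, with invented ratings rather than the study's data:

```python
# Illustration of the AUC statistic reported in the abstract: how well
# do admission risk ratings separate patients who later assaulted staff
# (1) from those who did not (0)? All scores below are invented.
from sklearn.metrics import roc_auc_score

later_violent = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]     # observed outcome
attending_rating = [4, 5, 3, 2, 1, 2, 1, 4, 3, 1]  # 1-5 risk rating
resident_rating = [3, 2, 4, 3, 2, 4, 1, 2, 5, 3]   # 1-5 risk rating

print("attending AUC:", roc_auc_score(later_violent, attending_rating))
print("resident AUC: ", roc_auc_score(later_violent, resident_rating))
```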

Conclusions  Having less training and experience was associated with inaccurate violence risk assessment. Structured methods hold promise for improving training in risk assessment for violence.

The full article is here.

Thursday, June 14, 2012

Examination of the Effectiveness of the Mental Health Environment of Care Checklist in Reducing Suicide on Inpatient Mental Health Units

Archives of General Psychiatry
Bradley V. Watts, MD, MPH; Yinong Young-Xu, ScD, MA, MS; Peter D. Mills, PhD, MS; Joseph M. DeRosier, PE, CSP; Jan Kemp, RN, PhD; Brian Shiner, MD, MPH; William E. Duncan, MD, PhD
Arch Gen Psychiatry. 2012;69(6):588-592. doi:10.1001/archgenpsychiatry.2011.1514

Abstract

Objective  To evaluate the effect of identification and abatement of hazards on inpatient suicides in the Veterans Health Administration (VHA).

Design, Setting, and Patients  The effect of implementation of a checklist (the Mental Health Environment of Care Checklist) and abatement process designed to remove suicide hazards from inpatient mental health units in all VHA hospitals was examined by measuring change in the rate of suicides before and after the intervention.

Intervention  Implementation of the Mental Health Environment of Care Checklist.

Results  Implementation of the Mental Health Environment of Care Checklist was associated with a reduction in the rate of completed inpatient suicide in VHA hospitals nationally.

Conclusions  Use of the Mental Health Environment of Care Checklist was associated with a substantial reduction in the inpatient suicide rate occurring on VHA mental health units. Use of the checklist in non-VHA hospitals may be warranted.
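
The before/after comparison amounts to an incident-rate calculation. A small sketch with invented counts, not the study's figures:

```python
# Hedged illustration of a before/after rate comparison of the kind the
# abstract describes. The event counts and admission totals below are
# hypothetical placeholders, not data from the study.
events_before, admissions_before = 40, 1_500_000
events_after, admissions_after = 15, 1_800_000

rate_before = events_before / admissions_before * 100_000
rate_after = events_after / admissions_after * 100_000

print(f"rate before: {rate_before:.2f} per 100,000 admissions")
print(f"rate after:  {rate_after:.2f} per 100,000 admissions")
print(f"rate ratio:  {rate_after / rate_before:.2f}")
```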

The entire article is free here.

Thanks to Ken Pope for this information.

An article about psychologists using checklists to reduce treatment failure is here.

Friday, April 13, 2012

Can Checklists Help Reduce Treatment Failures?

Samuel Knapp, EdD, ABPP
Director of Professional Affairs

John Gavazzi, PhD, ABPP
Chair, PPA Ethics Committee

Originally Published in The Pennsylvania Psychologist

Checklists have become a staple of safety science. Airline pilots, for example, will meet with other members of the airline crew and go through a checklist before they fly a plane. Checklists have been proposed for surgeons (Gawande, 2009) and other physicians (Ely et al., 2011). Could checklists be useful for psychologists? If so, when could they be useful?
Using checklists for complex work such as general medicine, surgery, or psychological services may seem overly simplistic. However, proponents argue that checklists have value precisely because of the complexity of these processes. Although the items in a checklist may seem basic, the risk that decision makers will make “dumb” mistakes increases when they are confronted with large amounts of complex information, much of which may be contradictory or ambiguous. Checklists can help health care professionals in difficult situations by reducing reliance on memory alone and, more importantly, by allowing them to step back, reflect on, and rethink their initial decisions (Ely et al., 2011).
For most patients, checklists would be unnecessary. Most patients do well in therapy, and 50% of patients terminate therapy in 10 sessions or fewer. Nonetheless, a few patients have more complicated problems, take more time to report therapeutic benefits, drop out of treatment unexpectedly, or otherwise fail in therapy. Checklists may be especially helpful with these difficult patients.
Knapp and Gavazzi (2012) proposed that treatment outcomes can be improved by using the “four-session rule.” According to this rule, if a patient is not making gains at the end of four sessions or does not have a good working relationship with the psychologist (in the absence of an obvious reason), the psychologist should reassess the treatment with this patient. The four-session rule does not require transferring the patient. Instead, the rule requires psychologists to reconsider the case, perhaps using the checklist provided at the end of this article.
Often, the reasons for a lack of improvement in psychotherapy are obvious. For example, a patient enters therapy with a minor depression but then gets worse because of a sudden, unanticipated layoff from work. The reason for the deterioration is clear, and the psychologist will almost automatically talk with the patient about modifying treatment in light of the new life circumstances. Deterioration of this kind, by itself, does not appear predictive of a treatment failure.
We consider the “four-session rule” a useful heuristic because it helps control for over-optimism on the part of psychologists. Evidence suggests that many psychologists are overly optimistic about their ability to help patients. For example, Stewart and Chambless (2008) found that psychologists worked with patients for a median of 12 sessions before concluding that treatment was not working and considering alternative steps. Yet Lambert (2007) claims that his algorithm can predict risk for treatment failure by the fourth session with a high degree of accuracy. Together, these sources suggest that psychologists should adopt a lower threshold for considering a case at risk of failure.
We suggest using a checklist when treating a patient who falls under the “four-session rule.” After identifying an area of concern from the checklist, the psychologist can follow up in more detail using the footnoted questions.
We know of no empirical studies validating this checklist for patients at risk of treatment failure. Nonetheless, it represents an effort at the self-reflection that is needed in difficult cases. Readers may send feedback or comments on this checklist to Drs. Sam Knapp or John Gavazzi.
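
As one illustration, the checklist below could be recorded in a simple structure that flags every “no” answer for follow-up with the footnoted questions; the field names and flagging logic here are hypothetical:

```python
# Hypothetical sketch of recording the Four-Session Checklist. Items are
# paraphrased so that a "no" answer always flags a concern; the data
# structure and threshold logic are illustrative assumptions only.
ITEMS = [
    "Patient thinks the working relationship is good",
    "Patient and psychologist share the same treatment goals",
    "Patient reports progress in therapy",
    "Patient wants to continue (and sees how treatment might be modified)",
    "Psychologist believes the working relationship is positive",
    "Assessment is sufficiently comprehensive",
    "No unresolved clinical issues impede treatment",
    "No medical examination is needed",
    "Documentation is appropriate",
]

def concerns(answers):
    """Return the items answered 'no' that merit follow-up."""
    return [item for item in ITEMS if not answers.get(item, False)]

# Example: a four-session review in which goals and progress are in doubt.
review = {item: True for item in ITEMS}
review["Patient and psychologist share the same treatment goals"] = False
review["Patient reports progress in therapy"] = False

for item in concerns(review):
    print("Follow up:", item)
```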

Four-Session Checklist

Patient Collaboration (What does the patient say?)

YES ___ NO ___ 1. Does the patient think you have a good working relationship?

YES ___ NO ___ 2. Do you and your patient share the same treatment goals?[1]
YES ___ NO ___ 3. Does the patient report any progress in therapy?[2]
YES ___ NO ___ 4. Does the patient want to continue in treatment? [3] If so, does the patient see a need to modify treatment?

Additional Reflections (What do you think about the patient?)

YES ___ NO ___ 5. Do you believe you have a positive working relationship with your patient? (Does he or she trust you enough to share sensitive information and collaborate?)[4]

YES ___ NO ___ 6. Is your assessment of the patient sufficiently comprehensive?[5] Do you need to obtain additional information?

YES ___ NO ___ 7. Do unresolved clinical issues of significant concern impede the course of treatment (such as Axis II issues, possible minimization of substance abuse, or ethical concerns)?

YES ___ NO ___ 8. Does the patient need a medical examination?

Documentation

YES ___ NO ___ 9. Have you documented appropriately?

References
Ely, J., Graber, M. L., & Croskerry, P. (2011). Checklists to reduce diagnostic errors. Academic Medicine, 86, 307-313.
Gawande, A. (2009). The checklist manifesto. New York: Holt.
Knapp, S., & Gavazzi, J. (2012). Ethical issues with difficult patients. In S. Knapp, M. C. Gottlieb, M. Handelsman, & L. VandeCreek, (Eds.), APA handbook of ethics in psychology. Washington, DC: American Psychological Association.
Lambert, M. (2007). Presidential address: What have we learned from a decade of research aimed at improving psychotherapy outcome in routine care? Psychotherapy Research, 17, 1-14.
Stewart, R., & Chambliss, D. (2008). Treatment failures in private practice: How do psychologists proceed? Professional Psychology: Research and Practice, 39, 176-181.


[1] Do you understand your patient’s goals and how he or she expects to achieve them? How do they correspond to your goals and preferred methods of treatment? If they differ, can you reach a compromise? Does the patient buy into treatment? Did you document the goals in your treatment notes? What did the patient say was particularly helpful or hindering about therapy? Have you incorporated your patient’s perceptions into your treatment plan?

[2] Do you agree on how to measure progress (self-report, reports of others, psychometric testing, non-reactive objective measures, etc.)? Does the patient need a medical examination?

[3] If yes, why?

[4] Can you identify what is happening in the relationship to prevent a therapeutic alliance? Does the patient identify an impasse? Do your feelings toward your patient compromise your ability to be helpful? If so, how can you change those feelings? Have you sought consultation on your relationship or feelings about the patient? If so, what did you learn?

[5] Have you reassessed the diagnosis or treatment methods using the BASIC ID, MOST CARE, or another system designed to review the presenting problem? Are you sensitive to cultural, gender-related status, sexual orientation, SES, or other factors? What input did you get from the patient, significant others of the patient, or consultants (this is especially important if there are life-endangering features)?