Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Errors.

Monday, June 23, 2025

Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation

Tierney, A. A., et al. (2024).
NEJM Catalyst, 5(3).

Abstract

Clinical documentation in the electronic health record (EHR) has become increasingly burdensome for physicians and is a major driver of clinician burnout and dissatisfaction. Time dedicated to clerical activities and data entry during patient encounters also negatively affects the patient–physician relationship by hampering effective and empathetic communication and care. Ambient artificial intelligence (AI) scribes, which use machine learning applied to conversations to facilitate scribe-like capabilities in real time, have great potential to reduce documentation burden, enhance physician–patient encounters, and augment clinicians’ capabilities. The technology leverages a smartphone microphone to transcribe encounters as they occur but does not retain audio recordings. To address the urgent and growing burden of data entry, in October 2023, The Permanente Medical Group (TPMG) enabled ambient AI technology for 10,000 physicians and staff to augment their clinical capabilities across diverse settings and specialties. The implementation process leveraged TPMG’s extensive experience in large-scale technology instantiation and integration, incorporating multiple training formats, at-the-elbow peer support, patient-facing materials, rapid-cycle upgrades with the technology vendor, and ongoing monitoring. In the 10 weeks since implementation, the ambient AI tool has been used by 3,442 TPMG physicians to assist in as many as 303,266 patient encounters across a wide array of medical specialties and locations. In total, 968 physicians have enabled ambient AI scribes in ≥100 patient encounters, with one physician having enabled it to assist in 1,210 encounters. The response from physicians who have used the ambient AI scribe service has been favorable; they cite the technology’s capability to facilitate more personal, meaningful, and effective patient interactions and to reduce the burden of after-hours clerical work. In addition, early assessments of patient feedback have been positive, with some describing improved interaction with their physicians. Early evaluation metrics, based on an existing tool that evaluates the quality of human-generated scribe notes, find that ambient AI use produces high-quality clinical documentation for physicians’ editing. Further statistical analyses after AI scribe implementation also find that usage is linked with reduced time spent in documentation and in the EHR. Ongoing enhancements of the technology are needed and are focused on direct EHR integration, improved capabilities for incorporating medical interpretation, and enhanced workflow personalization options for individual users. Despite this technology’s early promise, careful and ongoing attention must be paid to ensure that the technology supports clinicians while also optimizing ambient AI scribe output for accuracy, relevance, and alignment with the physician–patient relationship.

Key Takeaways

• Ambient artificial intelligence (AI) scribes show early promise in reducing clinicians’ burden, with a regional pilot noting a reduction in the amount of time spent constructing notes among users.

• Ambient AI scribes were found to be acceptable among clinicians and patients, largely improving the experience of both parties, with some physicians noting the transformational nature of the technology on their care.

• Although a review of 35 AI-generated transcripts resulted in an average score of 48 of 50 in 10 key domains, AI scribes are not a replacement for clinicians. They can produce inconsistencies that require physicians’ review and editing to ensure that they remain aligned with the physician–patient relationship.

• Given the incredible pace of change, building a dynamic evaluation framework is essential to assess the performance of AI scribes across domains including engagement, effectiveness, quality, and safety.
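
To make the documentation workflow described in the abstract concrete, here is a minimal sketch of an ambient-scribe pipeline. It is an illustration under stated assumptions, not TPMG's or its vendor's actual implementation; transcribe_chunk and draft_note_from_transcript are hypothetical stand-ins for the speech-to-text and note-drafting services.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    encounter_id: str
    text: str
    status: str = "pending_physician_review"  # AI output is a draft, never the final note

def transcribe_chunk(audio_chunk: bytes) -> str:
    # Hypothetical stand-in for a streaming speech-to-text service.
    return f"<transcript of {len(audio_chunk)} audio bytes>"

def draft_note_from_transcript(transcript: str) -> str:
    # Hypothetical stand-in for the model that drafts the clinical note.
    return "Draft note based on: " + transcript[:60]

def process_encounter(encounter_id: str, audio_stream) -> DraftNote:
    transcript_parts = []
    for chunk in audio_stream:
        transcript_parts.append(transcribe_chunk(chunk))
        # The raw audio chunk is discarded after transcription; only text persists,
        # mirroring the article's point that recordings are not retained.
    return DraftNote(encounter_id, draft_note_from_transcript(" ".join(transcript_parts)))

note = process_encounter("enc-001", [b"\x00" * 1024, b"\x00" * 2048])
print(note.status, "|", note.text)
```

The design point worth noticing is that only text persists and every note is created in a pending-review state, matching the article's emphasis on physician editing of AI output.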

Friday, December 13, 2024

A Case of Artificial Intelligence Chatbot Hallucination

Colasacco, C. J., & Born, H. L. (2024).
JAMA Otolaryngology–Head & Neck Surgery,
150(6), 457.

Despite the number of potential benefits of artificial intelligence (AI) use, examples from various fields of study have demonstrated that it is not an infallible technology. Our recent experience with AI chatbot tools is not to be overlooked by medical practitioners who use AI for practice guidance. By sharing this experience, we aim to remind clinicians to use AI in conjunction with human experience as well as highlight the assistance AI can provide with creative tasks.

The senior author, an otolaryngologist (H.B.; henceforth, Dr Born), was recently using the chatbot Bard (Google LLC; date of use: November 2023)1 to assist in preparing a brief presentation on the frontiers of airway care in laryngology for a continuing medical education course. After helping Dr Born gather information for an outline, the chatbot was asked to provide citations for the information provided. To check the accuracy of the results, Dr Born searched Google Scholar, PubMed, and the cited journal’s website to find the cited reference; however, she was unable to locate the first citation that was provided: “Telemedicine for the management of airway stenosis. Expert Rev Respir Med. 2022;16(10):1485-1493,” and its digital object identifier (DOI) produced an error message, “DOI not found.”

Because Dr Born could not find the article, she asked the chatbot to summarize its content, an action the chatbot performed quickly. Subsequently, Dr Born asked the chatbot to provide the internet link to the article, and again, the same citation for the article was provided. Still unable to find the article online, Dr Born asked, “Does this paper actually exist?” The chatbot responded, “I apologize for the mistake. I have double-checked, and the paper ‘Telemedicine for the management of airway stenosis by Thomas et al (2022)’ does not exist. I am still under development, and I am always learning. I will try my best to avoid making similar mistakes in the future.” The chatbot’s acknowledgment of its imperfections can be commended, but the reliability of the answers provided by this rapidly emerging technology is open to question.


Here are some thoughts:

This article discusses the issue of AI hallucination in medical practice, specifically focusing on two instances where AI chatbots generated incorrect information. The authors highlight the importance of understanding the limitations of AI-powered chatbots and emphasize the need for careful fact-checking and critical evaluation of their output, even when used for research purposes. The authors conclude that, despite these limitations, AI can still be a valuable tool for generating new research ideas, as demonstrated by their own experience with AI-inspired research on the use of telemedicine for airway stenosis.

Wednesday, February 14, 2024

Responding to Medical Errors—Implementing the Modern Ethical Paradigm

T. H. Gallagher & A. Kachalia
The New England Journal of Medicine
January 13, 2024
DOI: 10.1056/NEJMp2309554

Here are some excerpts:

Traditionally, recommendations regarding responding to medical errors focused mostly on whether to disclose mistakes to patients. Over time, empirical research, ethical analyses, and stakeholder engagement began to inform expectations, now embodied in communication and resolution programs (CRPs), for how health care professionals and organizations should respond not just to errors but also any time patients have been harmed by medical care (adverse events). CRPs require several steps: quickly detecting adverse events, communicating openly and empathetically with patients and families about the event, apologizing and taking responsibility for errors, analyzing events and redesigning processes to prevent recurrences, supporting patients and clinicians, and proactively working with patients toward reconciliation. In this modern ethical paradigm, any time harm occurs, clinicians and health care organizations are accountable for minimizing suffering and promoting learning. However, implementing this ethical paradigm is challenging, especially when the harm was due to an error.

Historically, the individual physician was deemed the "captain of the ship," solely accountable for patient outcomes. Bioethical analyses emphasized the fiduciary nature of the doctor-patient relationship (i.e., doctors are in a position of greater knowledge and power) and noted that telling patients...about harmful errors supported patient autonomy and facilitated informed consent for future decisions. However, under U.S. tort law, physicians and organizations can be held accountable and financially liable for damages when they make negligent errors. As a result, ethical recommendations for openness were drowned out by fears of lawsuits and payouts, leading to a "deny and defend" response. Several factors initiated a paradigm shift. In the early 2000s, reports from the Institute of Medicine transformed the way the health care profession conceptualized patient safety.1 The imperative became creating cultures of safety that encouraged everyone to report errors to enable learning and foster more reliable systems. Transparency assumed greater importance, since you cannot fix problems you don't know about. The ethical imperative for openness was further supported when rising consumerism made it clear that patients expected responses to harm to include disclosure of what happened, an apology, reconciliation, and organizational learning.

(cut)

CRP Model for Responding to Harmful Medical Errors

Research has been critical to CRP expansion. Several studies have demonstrated that CRPs can enjoy physician support and operate without increasing liability risk. Nonetheless, research also shows that physicians remain concerned about their ability to communicate with patients and families after a harmful error and worry about liability risks including being sued, having their malpractice premiums raised, and having the event reported to the National Practitioner Data Bank (NPDB).5 Successful CRPs typically deploy a formal team, prioritize clinician and leadership buy-in, and engage liability insurers in their efforts. The table details the steps associated with the CRP model, the ethical rationale for each step, barriers to implementation, and strategies for overcoming them.

The growth of CRPs also reflects collaboration among diverse stakeholder groups, including patient advocates, health care organizations, plaintiff and defense attorneys, liability insurers, state medical associations, and legislators. Sustained stakeholder engagement that respects the diverse perspectives of each group has been vital, given the often opposing views these groups have espoused.
As CRPs proliferate, it will be important to address a few key challenges and open questions in implementing this ethical paradigm.


The article provides a number of recommendations for how healthcare providers can implement these principles. These include:
  • Developing open and honest communication with patients.
  • Providing timely and accurate information about the error.
  • Offering apologies and expressing empathy for the harm that has been caused.
  • Working with patients to develop a plan to address the consequences of the error.
  • Conducting a thorough investigation of the error to identify the root causes and prevent future errors.
  • Sharing the results of the investigation with patients and the public.

Tuesday, December 19, 2023

Human bias in algorithm design

Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al.
Nat Hum Behav 7, 1822–1824 (2023).

Here is how the article starts:

Algorithms are designed to learn user preferences by observing user behaviour. This causes algorithms to fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed. Many people believe that algorithms are failing to live up to their promise to reflect user preferences and improve social welfare. The problem is not technological. Modern algorithms are sophisticated and accurate. Training algorithms on unrepresentative samples contributes to the problem, but failures happen even when algorithms are trained on the population. Nor is the problem caused only by the profit motive. For-profit firms design algorithms at a cost to users, but even non-profit organizations and governments fall short.

All algorithms are built on a psychological model of what the user is doing. The fundamental constraint on this model is the narrowness of the measurable variables for algorithms to predict. We suggest that algorithms fail to reflect user preferences and enhance their welfare because algorithms rely on revealed preferences to make predictions. Designers build algorithms with the erroneous assumption that user behaviour (revealed preferences) tells us (1) what users rationally prefer (normative preferences) and (2) what will enhance user welfare. Reliance on this 95-year-old economic model, rather than the more realistic assumption that users exhibit bounded rationality, leads designers to train algorithms on user behaviour. Revealed preferences can identify unknown preferences, but revealed preferences are an incomplete — and at times misleading — measure of the normative preferences and values of users. It is ironic that modern algorithms are built on an outmoded and indefensible commitment to revealed preferences.


Here is my summary.

Human biases can be reflected in algorithms, leading to unintended discriminatory outcomes. The authors argue that algorithms are not simply objective tools, but rather embody the values and assumptions of their creators. They highlight the importance of considering psychological factors when designing algorithms, as human behavior is often influenced by biases. To address this issue, the authors propose a framework for developing psychologically informed algorithms that can better capture user preferences and enhance social welfare. They emphasize the need for a more holistic approach to algorithm design that goes beyond technical considerations and takes into account the human element.
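
As a toy illustration of the revealed-preference assumption the authors criticize, the sketch below contrasts ranking purely by observed clicks with ranking that blends in an explicit, reflective rating. The items, numbers, and the 0.7 weighting are hypothetical, chosen only to show how the two objectives can order the same content differently.

```python
# Each tuple: (item, observed click rate, what the user says they value on reflection, 0-5).
# All names and numbers are hypothetical.
items = [
    ("outrage_post",   0.12, 1.5),
    ("news_explainer", 0.05, 4.5),
    ("friend_update",  0.08, 3.5),
]

def rank_by_revealed_preference(catalog):
    """What a purely click-trained ranker optimizes: behaviour taken as preference."""
    return sorted(catalog, key=lambda it: it[1], reverse=True)

def rank_by_blended_preference(catalog, weight_on_reflection=0.7):
    """A psychologically informed alternative: blend behaviour with an explicit
    measure of what users endorse on reflection (normative preference)."""
    max_clicks = max(it[1] for it in catalog)
    def blended(it):
        return ((1 - weight_on_reflection) * (it[1] / max_clicks)
                + weight_on_reflection * (it[2] / 5.0))
    return sorted(catalog, key=blended, reverse=True)

print([it[0] for it in rank_by_revealed_preference(items)])  # outrage_post ranked first
print([it[0] for it in rank_by_blended_preference(items)])   # news_explainer ranked first
```

The point of the contrast is the one the authors make: behaviour alone can reward content that users click on impulsively but would not endorse on reflection.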

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology connect not only to machine learning but also to political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.
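
One concrete version of the transparency point above, and of the excerpt's observation that generative models emit fluent answers with no uncertainty signals, is to surface a crude confidence estimate alongside the output when token-level log-probabilities are available. This is a minimal sketch with made-up numbers, not a calibrated method or any vendor's API.

```python
import math

def mean_token_confidence(token_logprobs):
    """Average per-token probability: a rough proxy, not a calibrated measure."""
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def present_with_hedge(answer_text, token_logprobs, threshold=0.7):
    confidence = mean_token_confidence(token_logprobs)
    if confidence < threshold:
        return f"[Low model confidence ~{confidence:.2f}; verify independently] {answer_text}"
    return f"[Model confidence ~{confidence:.2f}] {answer_text}"

# Hypothetical log-probabilities for a generated citation.
print(present_with_hedge("Thomas et al. (2022), Expert Rev Respir Med.",
                         [-0.1, -0.9, -1.6, -0.4, -2.2]))
```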

Sunday, July 2, 2023

Predictable, preventable medical errors kill thousands yearly. Is it getting any better?

Karen Weintraub
USAToday.com
Originally posted 3 May 23

Here are two excerpts:

A 2017 study put the number of deaths from medical errors at over 250,000 a year, making medical errors the nation's third leading cause of death at the time. There are no more recent figures.

But the pandemic clearly worsened patient safety, with Leapfrog's new assessment showing increases in hospital-acquired infections, including urinary tract and drug-resistant staph infections as well as infections in central lines ‒ tubes inserted into the neck, chest, groin, or arm to rapidly provide fluids, blood or medications. These infections spiked to a 5-year high during the pandemic and remain high.

"Those are really terrible declines in performance," Binder said.

Patient safety: 'I've never ever, ever seen that'

Not all patient safety news is bad. In one study published last year, researchers examined records from 190,000 patients discharged from hospitals nationwide after being treated for a heart attack, heart failure, pneumonia or major surgery. Patients saw far fewer bad events following treatment for those four conditions, as well as for adverse events caused by medications, hospital-acquired infections, and other factors.

It was the first study of patient safety that left Binder optimistic. "This was improvement and I've never ever, ever seen that," she said.

(cut)

On any given day now, 1 of every 31 hospitalized patients acquires an infection during their stay, according to a recent study from the Centers for Disease Control and Prevention. This costs health care systems at least $28.4 billion each year and accounts for an additional $12.4 billion from lost productivity and premature deaths.

"That blew me away," said Shaunte Walton, system director of Clinical Epidemiology & Infection Prevention at UCLA Health. Electronic tools can help, but even with them, "there's work to do to try to operationalize them," she said.

The patient experience also slipped during the pandemic. According to Leapfrog's latest survey, patients reported declines in nurse communication, doctor communication, staff responsiveness, communication about medicine and discharge information.

Boards and leadership teams are "highly distracted" right now with workforce shortages, new payment systems, concerns about equity and decarbonization, said Dr. Donald Berwick, president emeritus and senior fellow at the Institute for Healthcare Improvement and former administrator of the Centers for Medicare & Medicaid Services.

Tuesday, June 15, 2021

Diagnostic Mistakes a Big Contributor to Malpractice Suits, Study Finds

Joyce Friedan
MedPageToday.com
Originally posted 26 May 21

Here are two excerpts

One problem is that "healthcare is inherently risky," she continued. For example, "there's ever-changing industry knowledge, growing bodies of clinical options, new diseases, and new technology. There are variable work demands -- boy, didn't we experience that this past year! -- and production pressure has long been a struggle and a challenge for our providers and their teams." Not to mention variable individual competency, an aging population, complex health issues, and evolving workforces.

(cut)

Cognitive biases can also trigger diagnostic errors, Siegal said. "Anchor bias" occurs when "a provider anchors on a diagnosis, early on, and then through the course of the journey looks for things to confirm that diagnosis. Once they've confirmed it enough that 'search satisfaction' is met, that leads to premature closure" of the patient's case. But that causes a problem because "it means that there's a failure to continue exploring other options. What else could it be? It's a failure to establish, perhaps, every differential diagnosis."

To avoid this problem, providers "always want to think about, 'Am I anchoring too soon? Am I looking to confirm, rather than challenge, my diagnosis?'" she said. According to the study, 25% of cases didn't have evidence of a differential diagnosis, and 36% fell into the category of "confirmation bias" -- "I was looking for things to confirm what I knew, but there were relevant signs and symptoms or positive tests that were still present that didn't quite fit the picture, but it was close. So they were somehow discounted, and the premature closure took over and a diagnosis was made," she said.

She suggested that clinicians take a "diagnostic timeout" -- similar to a surgical timeout -- when they're arriving at a diagnosis. "What else could this be? Have I truly explored all the other possibilities that seem relevant in this scenario and, more importantly, what doesn't fit? Be sure to dis-confirm as well."

Monday, June 14, 2021

Bias Is a Big Problem. But So Is ‘Noise.’

Daniel Kahneman, O. Sibony & C.R. Sunstein
The New York Times
Originally posted 15 May 21

Here is an excerpt:

There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). 

Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. 

We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). 

Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. 

As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. 

Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. 

Organizations and institutions, public and private, will make better decisions if they take noise seriously.
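
The claim above that averaging independent judgments reduces noise can be checked with a few lines of simulation. The sketch below uses made-up Gaussian noise; the averaged judgment's error should shrink by roughly the square root of the number of judges.

```python
import random
import statistics

def noise_demo(n_judges=10, n_cases=5000, noise_sd=1.0):
    """Each case has a true value; every judge adds independent Gaussian noise.
    Compare the error of a single judge with the error of the judges' average."""
    single_errors, averaged_errors = [], []
    for _ in range(n_cases):
        true_value = random.gauss(0, 1)
        judgments = [true_value + random.gauss(0, noise_sd) for _ in range(n_judges)]
        single_errors.append(abs(judgments[0] - true_value))
        averaged_errors.append(abs(statistics.mean(judgments) - true_value))
    return statistics.mean(single_errors), statistics.mean(averaged_errors)

single, averaged = noise_demo()
print(f"mean error, one judge: {single:.3f}")
print(f"mean error, average of 10 judges: {averaged:.3f}")  # roughly single / sqrt(10)
```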

Thursday, March 19, 2020

Responding to Unprofessional Behavior by Trainees — A “Just Culture” Framework

J. A. Wasserman, M. Redinger, and T. Gibb
New England Journal of Medicine
February 20, 2020
doi: 10.1056/NEJMms1912591

Professionalism lapses by trainees can be addressed productively if viewed through a lens of medical error, drawing on “just culture” principles. With this approach, educators can promote a formative learning environment while fairly addressing problematic behaviors.

Addressing lapses in professionalism is critical to professional development. Yet characterizing the ways in which the behavior of emerging professionals may fall short and responding to those behaviors remain difficult.

Catherine Lucey suggests that we “consider professionalism lapses to be either analogous to or a form of medical error,” in order to create “a ‘just environment’ in which people are encouraged to report professionalism challenges, lapses, and near misses.” Applying a framework of medical error promotes an understanding of professionalism as a set of skills whose acquisition requires a psychologically safe learning environment.

 Lucey and Souba also note that professionalism sometimes requires one to act counter to one’s other interests and motivations (e.g., to subordinate one’s own interests to those of others); the skills required to navigate such dilemmas must be acquired over time, and therefore trainees’ behavior will inevitably sometimes fall short.

We believe that lapses in professional behavior can be addressed productively if we view them through this lens of medical error, drawing on “just culture” principles and related procedural approaches.

(cut)

The Just Culture Approach

Thanks to a movement catalyzed by an Institute of Medicine report, error reduction has become a priority of health systems over the past two decades. Their efforts have involved creating a “culture of psychological safety” that allows for open dialogue, dissent, and transparent reporting. Early iterations involved “blame free” approaches, which have increasingly given way to an emphasis on balancing individual and system accountability.

Drawing on these just culture principles, a popular approach for defining and responding to medical error recognizes the qualitative differences among inadvertent human error, at-risk behavior, and reckless behavior (the Institute for Safe Medication Practices also provides an excellent elaboration of these categories).

“Inadvertent human errors” result from suboptimal individual functioning, but without intention or the knowledge that a behavior is wrong or error-prone (e.g., an anesthesiologist inadvertently grabbing a paralyzing agent instead of a reversal agent). These errors are not considered blameworthy, and proper response involves consolation and assessment of systemic changes to prevent them in the future.

Wednesday, March 4, 2020

How Common Mental Shortcuts Can Cause Major Physician Errors

Anupam B. Jena and Andrew R. Olenski
The New York Times
Originally posted 20 Feb 20

Here is an excerpt:

In health care, such unconscious biases can lead to disparate treatment of patients and can affect whether similar patients live or die.

Sometimes these cognitive biases are simple overreactions to recent events, what psychologists term availability bias. One study found that when patients experienced an unlikely adverse side effect of a drug, their doctor was less likely to order that same drug for the next patient whose condition might call for it, even though the efficacy and appropriateness of the drug had not changed.

A similar study found that when mothers giving birth experienced an adverse event, their obstetrician was more likely to switch delivery modes for the next patient (C-section vs. vaginal delivery), regardless of the appropriateness for that next patient. This cognitive bias resulted in both higher spending and worse outcomes.

Doctor biases don’t affect treatment decisions alone; they can shape the profession as a whole. A recent study analyzed gender bias in surgeon referrals and found that when the patient of a female surgeon dies, the physician who made the referral to that surgeon sends fewer patients to all female surgeons in the future. The study found no such decline in referrals for male surgeons after a patient death.

This list of biases is far from exhaustive, and though they may be disconcerting, uncovering new systematic mistakes is critical for improving clinical practice.

The info is here.

Wednesday, January 22, 2020

Association Between Physician Depressive Symptoms and Medical Errors

Pereira-Lima, K., Mata, D. A., et al. (2019).
JAMA Network Open, 2(11), e1916097

Abstract

Importance  Depression is highly prevalent among physicians and has been associated with increased risk of medical errors. However, questions regarding the magnitude and temporal direction of these associations remain open in recent literature.

Objective  To provide summary relative risk (RR) estimates for the associations between physician depressive symptoms and medical errors.

Conclusions and Relevance  Results of this study suggest that physicians with a positive screening for depressive symptoms are at higher risk for medical errors. Further research is needed to evaluate whether interventions to reduce physician depressive symptoms could play a role in mitigating medical errors and thus improving physician well-being and patient care.

From the Discussion

Studies have recommended the addition of physician well-being to the Triple Aim of enhancing the patient experience of care, improving the health of populations, and reducing the per capita cost of health care. Results of the present study endorse the Quadruple Aim movement by demonstrating not only that medical errors are associated with physician health but also that physician depressive symptoms are associated with subsequent errors. Given that few physicians with depression seek treatment and that recent evidence has pointed to the lack of organizational interventions aimed at reducing physician depressive symptoms, our findings underscore the need for institutional policies to remove barriers to the delivery of evidence-based treatment to physicians with depression.

https://doi.org/10.1001/jamanetworkopen.2019.16097
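
For readers unfamiliar with how summary relative risk (RR) estimates like these are produced, here is a minimal fixed-effect (inverse-variance) pooling sketch. The study numbers in the example are hypothetical, and the paper's own meta-analytic model may differ (for example, random effects).

```python
import math

def pooled_relative_risk(rrs, cis):
    """Fixed-effect, inverse-variance pooling of relative risks.
    rrs: per-study relative risks; cis: matching (lower, upper) 95% CIs."""
    log_rrs = [math.log(rr) for rr in rrs]
    # Approximate each study's standard error from its 95% CI on the log scale.
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
    weights = [1 / se ** 2 for se in ses]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Hypothetical studies: RRs with 95% confidence intervals.
rr, lo, hi = pooled_relative_risk([1.9, 1.5, 2.2], [(1.3, 2.8), (1.1, 2.0), (1.4, 3.5)])
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```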

Tuesday, December 24, 2019

DNA genealogical databases are a gold mine for police, but with few rules and little transparency

Paige St. John
The LA Times
Originally posted 24 Nov 19

Here is an excerpt:

But law enforcement has plunged into this new world with little to no rules or oversight, intense secrecy and by forming unusual alliances with private companies that collect the DNA, often from people interested not in helping close cold cases but learning their ethnic origins and ancestry.

A Times investigation found:
  • There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
  • When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
  • California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.
There are growing concerns that the race to use genealogical databases will have serious consequences, from its inherent erosion of privacy to the implications of broadened police power.

In California, an innocent twin was thrown in jail. In Georgia, a mother was deceived into incriminating her son. In Texas, police met search guidelines by classifying a case as sexual assault but after an arrest only filed charges of burglary. And in the county that started the DNA race with the arrest of the Golden State killer suspect, prosecutors have persuaded a judge to treat unsuspecting genetic contributors as “confidential informants” and seal searches so consumers are not scared away from adding their own DNA to the forensic stockpile.

Friday, November 1, 2019

What Clinical Ethics Can Learn From Decision Science

Michele C. Gornick and Brian J. Zikmund-Fisher
AMA J Ethics. 2019;21(10):E906-912.
doi: 10.1001/amajethics.2019.906.

Abstract

Many components of decision science are relevant to clinical ethics practice. Decision science encourages thoughtful definition of options, clarification of information needs, and acknowledgement of the heterogeneity of people’s experiences and underlying values. Attention to decision-making processes reminds participants in consultations that how decisions are made and how information is provided can change a choice. Decision science also helps reveal affective forecasting errors (errors in predictions about how one will feel in a future situation) that happen when people consider possible future health states and suggests strategies for correcting these and other kinds of biases. Implementation of decision science innovations is not always feasible or appropriate in ethics consultations, but their uses increase the likelihood that an ethics consultation process will generate choices congruent with patients’ and families’ values.

Here is an excerpt:

Decision Science in Ethics Practice

Clinical ethicists can support informed, value-congruent decision making in ethically complex clinical situations by working with stakeholders to identify and address biases and the kinds of barriers just discussed. Doing so requires constantly comparing actual decision-making processes with ideal decision-making processes, responding to information deficits, and integrating stakeholder values. One key step involves regularly urging clinicians to clarify both available options and possible outcomes and encouraging patients to consider both their values and the possible meanings of different outcomes.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games and simple robotics tasks and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.

The info is here.
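
The training loop described in the excerpt, asking humans which of two outcomes is better and fitting a model to those comparisons, can be illustrated with a small Bradley-Terry-style sketch. The feature vectors and the "true" preference rule below are hypothetical; this is not the authors' system, just the general pairwise-comparison idea.

```python
import math
import random

def score(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def train_preference_model(comparisons, n_features, lr=0.1, epochs=200):
    """comparisons: (features_of_better_outcome, features_of_worse_outcome) pairs,
    as judged by humans. Fits weights so that score(better) > score(worse)."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in comparisons:
            margin = score(w, better) - score(w, worse)
            p_better = 1 / (1 + math.exp(-margin))   # Bradley-Terry probability
            grad = 1 - p_better                      # d(log-likelihood) / d(margin)
            for i in range(n_features):
                w[i] += lr * grad * (better[i] - worse[i])
    return w

# Hypothetical judgments: humans prefer outcomes with more of feature 0, less of feature 1.
random.seed(0)
pairs = []
for _ in range(300):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    better, worse = (a, b) if (a[0] - a[1]) > (b[0] - b[1]) else (b, a)
    pairs.append((better, worse))

print(train_preference_model(pairs, n_features=2))  # roughly [positive, negative]
```

The learned weights reproduce the human judgments without anyone ever writing down the underlying rule, which is the "fuzzy access to approximate rules" the authors describe.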

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.

Friday, July 20, 2018

The Psychology of Offering an Apology: Understanding the Barriers to Apologizing and How to Overcome Them

Karina Schumann
Current Directions in Psychological Science 
Vol 27, Issue 2, pp. 74 - 78
First Published March 8, 2018

Abstract

After committing an offense, a transgressor faces an important decision regarding whether and how to apologize to the person who was harmed. The actions he or she chooses to take after committing an offense can have dramatic implications for the victim, the transgressor, and their relationship. Although high-quality apologies are extremely effective at promoting reconciliation, transgressors often choose to offer a perfunctory apology, withhold an apology, or respond defensively to the victim. Why might this be? In this article, I propose three major barriers to offering high-quality apologies: (a) low concern for the victim or relationship, (b) perceived threat to the transgressor’s self-image, and (c) perceived apology ineffectiveness. I review recent research examining how these barriers affect transgressors’ apology behavior and describe insights this emerging work provides for developing methods to move transgressors toward more reparative behavior. Finally, I discuss important directions for future research.

The article is here.

Wednesday, June 13, 2018

The Burnout Crisis in American Medicine

Rena Xu
The Atlantic
Originally published May 11, 2018

Here is an excerpt:

In medicine, burned-out doctors are more likely to make medical errors, work less efficiently, and refer their patients to other providers, increasing the overall complexity (and with it, the cost) of care. They’re also at high risk of attrition: A survey of nearly 7,000 U.S. physicians, published last year in the Mayo Clinic Proceedings, reported that one in 50 planned to leave medicine altogether in the next two years, while one in five planned to reduce clinical hours over the next year. Physicians who self-identified as burned out were more likely to follow through on their plans to quit.

What makes the burnout crisis especially serious is that it is hitting us right as the gap between the supply and demand for health care is widening: A quarter of U.S. physicians are expected to retire over the next decade, while the number of older Americans, who tend to need more health care, is expected to double by 2040. While it might be tempting to point to the historically competitive rates of medical-school admissions as proof that the talent pipeline for physicians won’t run dry, there is no guarantee. Last year, for the first time in at least a decade, the volume of medical school applications dropped—by nearly 14,000, according to data from the Association of American Medical Colleges. By the association’s projections, we may be short 100,000 physicians or more by 2030.

The article is here.

Friday, May 11, 2018

AI experts want government algorithms to be studied like environmental hazards

Dave Gershgorn
Quartz (www.qz.com)
Originally published April 9, 2018

Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions.

AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.

“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” the report said. “The public will have less insight into how agencies function, and have less power to question or appeal decisions.”

The information is here.
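
As a rough illustration of what an algorithmic impact assessment might record, here is a hypothetical data structure covering the elements AI Now highlights: scope, capability, secondary impacts, and a channel for public concerns. The field names and example values are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    agency: str
    purpose: str
    data_sources: List[str]
    affected_populations: List[str]
    known_limitations: List[str] = field(default_factory=list)
    appeal_process: str = "unspecified"        # how affected people can contest a decision
    public_comment_url: str = "unspecified"    # channel for voicing concerns about bias or unfairness

# Hypothetical entry in a public AIA register.
aia = AlgorithmicImpactAssessment(
    system_name="benefits-eligibility-scorer",
    agency="Example County Social Services",
    purpose="prioritize case reviews, not final determinations",
    data_sources=["case history", "income records"],
    affected_populations=["benefits applicants"],
    known_limitations=["trained on pre-2020 caseload data"],
)
print(aia)
```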

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”

The article is here.
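
A small example of the kind of check Chou's point invites, comparing a model's outcomes across groups before trusting it, is sketched below. The applicants, groups, and decision rule are hypothetical stand-ins; real audits would use the production model and far more careful metrics.

```python
from collections import defaultdict

def positive_rates_by_group(records, predict):
    """records: dicts with a 'group' key plus model inputs; predict: model stand-in.
    Returns the fraction of positive (e.g., approval) decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(predict(r))
    return {g: positives[g] / totals[g] for g in totals}

def predict(record):
    # Hypothetical stand-in for a trained model's decision.
    return record["income"] > 50_000

applicants = [
    {"group": "A", "income": 60_000}, {"group": "A", "income": 40_000},
    {"group": "B", "income": 45_000}, {"group": "B", "income": 30_000},
]
print(positive_rates_by_group(applicants, predict))  # {'A': 0.5, 'B': 0.0}: a gap worth investigating
```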