Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, October 21, 2023

Should Trackable Pill Technologies Be Used to Facilitate Adherence Among Patients Without Insight?

Tahir Rahman
AMA J Ethics. 2019;21(4):E332-336.
doi: 10.1001/amajethics.2019.332.

Abstract

Aripiprazole tablets with sensor offer a new wireless trackable form of aripiprazole that represents a clear departure from existing drug delivery systems, routes, or formulations. This tracking technology raises concerns about the ethical treatment of patients with psychosis when it could introduce unintended treatment challenges. The use of “trackable” pills and other “smart” drugs or nanodrugs assumes renewed importance given that physicians are responsible for determining patients’ decision-making capacity. Psychiatrists are uniquely positioned in society to advocate on behalf of vulnerable patients with mental health disorders. The case presented here focuses on guidance for capacity determination and informed consent for such nanodrugs.

(cut)

Ethics and Nanodrug Prescribing

Clinicians often struggle with improving treatment adherence in patients with psychosis who lack insight and decision-making capacity, so trackable nanodrugs, even though not proven to improve compliance, are worth considering. At the same time, guidelines are lacking to help clinicians determine which patients are appropriate for trackable nanodrug prescribing. The introduction of an actual tracking device in a patient who suffers from delusions of an imagined tracking device, like Mr A, raises specific ethical concerns. Clinicians have widely accepted the premise that confronting delusions is countertherapeutic. The introduction of trackable pill technology could similarly introduce unintended harms. Paul Appelbaum has argued that “with paranoid patients often worried about being monitored or tracked, giving them a pill that does exactly that is an odd approach to treatment.” The fear of invasion of privacy might discourage some patients from being compliant with their medical care and thus foster distrust of all psychiatric services. A good therapeutic relationship (often with family, friends, or a guardian involved) is critical to the patient’s engaging in ongoing psychiatric services.

The use of trackable pill technology to improve compliance deserves further scrutiny, as continued reliance on informal physician determinations of decision-making capacity remains standard practice. Most patients are not yet accustomed to the idea of ingesting a trackable pill. Therefore, explanation of the intervention must be incorporated into the informed consent process, assuming the patient has decision-making capacity. Since patients may have concerns about the collected data being stored on a device, clinicians might have to answer questions regarding potential breaches of confidentiality. They will also have to contend with clinical implications of acquiring patient treatment compliance data and justifying decisions based on such information. Below is a practical guide to aid clinicians in appropriate use of this technology.

Friday, October 28, 2022

Gender and ethnicity bias in medicine: a text analysis of 1.8 million critical care records

David M Markowitz
PNAS Nexus, Volume 1, Issue 4,
September 2022, pgac157

Abstract

Gender and ethnicity biases are pervasive across many societal domains including politics, employment, and medicine. Such biases will facilitate inequalities until they are revealed and mitigated at scale. To this end, over 1.8 million caregiver notes (502 million words) from a large US hospital were evaluated with natural language processing techniques in search of gender and ethnicity bias indicators. Consistent with nonlinguistic evidence of bias in medicine, physicians focused more on the emotions of women compared to men and focused more on the scientific and bodily diagnoses of men compared to women. Content patterns were relatively consistent across genders. Physicians also attended to fewer emotions for Black/African and Asian patients compared to White patients, and physicians demonstrated the greatest need to work through diagnoses for Black/African women compared to other patients. Content disparities were clearer across ethnicities, as physicians focused less on the pain of Black/African and Asian patients compared to White patients in their critical care notes. This research provides evidence of gender and ethnicity biases in medicine as communicated by physicians in the field and requires the critical examination of institutions that perpetuate bias in social systems.
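To make the method more concrete, here is a minimal, purely illustrative sketch of the kind of lexicon-based comparison such an analysis might involve. The file name, column names, and the tiny emotion lexicon are assumptions for illustration, not the paper’s actual pipeline or data.

```python
# Illustrative sketch only: compare emotion-word rates in clinical notes across
# patient groups, loosely in the spirit of the study described above.
# The CSV file, column names, and lexicon below are assumptions.
import pandas as pd

EMOTION_WORDS = {"anxious", "upset", "tearful", "distressed", "agitated", "calm"}

def emotion_rate(text: str) -> float:
    """Fraction of tokens in a note that appear in the emotion lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in EMOTION_WORDS for t in tokens) / len(tokens)

# Assumed columns: note_text, patient_gender
notes = pd.read_csv("caregiver_notes.csv")
notes["emotion_rate"] = notes["note_text"].fillna("").map(emotion_rate)

# Average emotion-word rate per group; large gaps would warrant closer scrutiny.
print(notes.groupby("patient_gender")["emotion_rate"].mean())
```

A real analysis at this scale would of course use validated dictionaries and control for clinical context, but the basic comparison has this shape.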

Significance Statement

Bias manifests in many social systems, including education, policing, and politics. Gender and ethnicity biases are also common in medicine, though empirical investigations are often limited to small-scale, qualitative work that fails to leverage data from actual patient–physician records. The current research evaluated over 1.8 million caregiver notes and observed patterns of gender and ethnicity bias in language. In these notes, physicians focused more on the emotions of women compared to men, and physicians focused less on the emotions of Black/African patients compared to White patients. These patterns are consistent with other work investigating bias in medicine, though this study is among the first to document such disparities at the language level and at a massive scale.

From the Discussion Section

This evidence is important because it establishes a link between communication patterns and bias that is often unobserved or underexamined in medicine. Bias in medicine has been predominantly revealed through procedural differences among ethnic groups, how patients of different ethnicities perceive their medical treatment, and structures that are barriers to entry for women and ethnic minorities. The current work revealed that the language found in everyday caregiver notes reflects disparities and indications of bias—new pathways that can complement other approaches to signal physicians who treat patients inequitably. Caregiver notes, given their private nature, are akin to medical diaries for physicians as they attend to patients, logging the thoughts, feelings, and diagnoses of medical professionals. Caregivers have the herculean task of tending to those in need, though the current evidence suggests bias and language-based disparities are a part of this system.

Wednesday, September 21, 2022

Professional Civil Disobedience — Medical-Society Responsibilities after Dobbs

Matthew K. Wynia
The New England Journal of Medicine
September 15, 2022, 387:959-961

Here are two excerpts:

The AMA called Dobbs “an egregious allowance of government intrusion into the medical examination room, a direct attack on the practice of medicine and the patient–physician relationship, and a brazen violation of patients’ rights to evidence-based reproductive health services.” The American Academy of Family Physicians wrote that the decision “negatively impacts our practices and our patients by undermining the patient–physician relationship and potentially criminalizing evidence-based medical care.” The American College of Physicians stated, “A patient’s decision about whether to continue a pregnancy should be a private decision made in consultation with a physician or other health care professional, without interference from the government.” And the CEO of the American College of Obstetricians and Gynecologists called Dobbs “tragic” for patients, “the boldest act of legislative interference that we have seen in this country,” and “an affront to all that drew my colleagues and me into medicine.”

Medical organizations are rarely so united. Yet even many physicians who oppose abortion recognize that medically nuanced decisions are best left in the hands of individual patients and their physicians — not state lawmakers. Abortion bans are already pushing physicians in some states to wait until patients become critically ill before intervening in cases of ectopic pregnancy or septic miscarriage, among other problems.

Beyond issuing strongly worded statements, what actions should medical organizations take in the face of laws that threaten patients’ well-being? Should they support establishing committees to decide when a pregnant person’s life is in sufficient danger to warrant an abortion? Should they advocate for allowing patients to travel elsewhere for care? Or should they encourage their members to provide evidence-based medical care, even if doing so means accepting — en masse — fines, suspensions of licensure, and potential imprisonment? How long could a dangerous state law survive if the medical profession, as a whole, refused to be intimidated into harming patients, even if such a refusal meant that many physicians might go to jail?

(cut)

Proposing professional civil disobedience of state laws prohibiting abortion might seem naive. Historically, physicians have rarely been radical, and most have conformed with bad laws and policies, even horrific ones — such as those authorizing forced-sterilization programs in the United States and Nazi Germany, the use of psychiatric hospitals as political prisons in the Soviet Union, and police brutality under apartheid in South Africa. Too often, organized medicine has failed to fulfill its duty to protect patients when doing so required acting against state authority. Although there are many examples of courageous individual physicians defying unjust laws or regulations, examples of open support for these physicians by their professional associations — such as the AMA’s offer to support physicians who refused to be involved in “enhanced” interrogations (i.e., torture) during the Iraq War — are uncommon. And profession-wide civil disobedience — such as Dutch physicians choosing to collectively turn in their licenses rather than practice under Nazi rule — is rare.

Sunday, August 28, 2022

Dr. Oz Shouldn’t Be a Senator—or a Doctor

Timothy Caulfield
Scientific American
Originally posted 15 DEC 21

While holding a medical license, Mehmet Oz, widely known as Dr. Oz, has long pushed misleading, science-free and unproven alternative therapies such as homeopathy, as well as fad diets, detoxes and cleanses. Some of these things have been potentially harmful, including hydroxychloroquine, which he once touted as beneficial in the treatment or prevention of COVID. This assertion has been thoroughly debunked.

He’s built a tremendous following around his lucrative but evidence-free advice. So, are we surprised that Oz is running as a Republican for the U.S. Senate in Pennsylvania? No, we are not. Misinformation-spouting celebrities seem to be a GOP favorite. This move is very on brand for both Oz and the Republican Party.

His candidacy is a reminder that tolerating and/or enabling celebrity pseudoscience (I’m thinking of you, Oprah Winfrey!) can have serious and enduring consequences. Much of Oz’s advice was bunk before the pandemic, it is bunk now, and there is no reason to assume it won’t be bunk after—even if he becomes Senator Oz. Indeed, as Senator Oz, it’s all but guaranteed he would bring pseudoscience to the table when crafting and voting on legislation that affects the health and welfare of Americans.

As viewed by someone who researches the spread of health misinformation, Oz’s candidacy remains deeply grating in that “of course he is” kind of way. But it is also an opportunity to highlight several realities about pseudoscience, celebrity physicians and the current regulatory environment that allows people like him to continue to call themselves doctor.

Before the pandemic I often heard people argue that the wellness woo coming from celebrities like Gwyneth Paltrow, Tom Brady and Oz was mostly harmless noise. If people want to waste their money on ridiculous vagina eggs, bogus diets or unproven alternative remedies, why should we care? Buyer beware, a fool and their money, a sucker is born every minute, etc., etc.

But we know, now more than ever, that pop culture can—for better or worse—have a significant impact on health beliefs and behaviors. Indeed, one need only consider the degree to which Jenny McCarthy gave life to the vile claim that autism is linked to vaccination. Celebrity figures like podcast host Joe Rogan and football player Aaron Rodgers have greatly added to the chaotic information regarding COVID-19 by magnifying unsupported claims.

Thursday, February 24, 2022

Robot performs first laparoscopic surgery without human help (and outperformed human doctors)

Johns Hopkins University. (2022, January 26). 
ScienceDaily. Retrieved January 28, 2022

A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human -- a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.

"Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure," said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins' Whiting School of Engineering.

The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.

Working with collaborators at the Children's National Hospital in Washington, D.C. and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig's intestines accurately, but required a large incision to access the intestine and more guidance from humans.

The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the-art imaging systems that provide more accurate visualizations of the surgical field.

Soft-tissue surgery is especially hard for robots because of its unpredictability, forcing them to be able to adapt quickly to handle unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.

Wednesday, August 4, 2021

A taxonomy of conscientious objection in healthcare

Gamble, N., & Saad, T. (2021). 
Clinical Ethics. 
https://doi.org/10.1177/1477750921994283

Abstract

Conscientious Objection (CO) has become a highly contested topic in the bioethics literature and public policy. However, when CO is discussed, it is almost universally referred to as a single entity. Reality reveals a more nuanced picture. Healthcare professionals may object to a given action on numerous grounds. They may oppose an action because of its ends, its means, or because of factors that lie outside of both ends and means. Our paper develops a taxonomy of CO, which makes it possible to describe the refusals of healthcare professionals with greater finesse. The application of this development will potentially allow for greater subtlety in public policy and academic discussions – some species of CO could be permitted while others could be prohibited.

Conclusion

The ethical analysis and framework we have presented demonstrate that conscience is intertwined with practical wisdom and is an intrinsic part of the work of healthcare professionals. The species of CO we have enumerated reveal that morality and values in healthcare are not only related to a few controversial ends, but to all ends and means in medicine, and the relationships between them.

The taxonomy we have presented will feasibly permit a more nuanced discussion of CO, where the issues surrounding and policy solutions for each species of CO can be discussed separately. Such a conversation is an important task. After all, CO will not go away, even if specific belief systems rise or fall. CO exists because humans have an innate awareness of the need to seek good and avoid evil, yet still arrive at disparate intellectual conclusions about what is right and wrong. Thus, if tolerant and amicable solutions are to be developed for CO, conversations on CO in healthcare need to continue with a more integrated understanding of practical reason and an awareness of broad involvement of conscience in medicine. We hope our paper contributes to this end.

Sunday, August 1, 2021

Understanding, explaining, and utilizing medical artificial intelligence

Cadario, R., Longoni, C. & Morewedge, C.K. 
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-021-01146-0

Abstract

Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).

From the Discussion

Utilization of algorithmic-based healthcare services is becoming critical with the rise of telehealth services, the current surge in healthcare demand, and long-term goals of providing affordable and high-quality healthcare in developed and developing nations. Our results yield practical insights for reducing reluctance to utilize medical AI. Because the technologies used in algorithmic-based medical applications are complex, providers tend to present AI provider decisions as a ‘black box’. Our results underscore the importance of recent policy recommendations to open this black box to patients and users. A simple one-page visual or sentence that explains the criteria or process used to make medical decisions increased acceptance of an algorithm-based skin cancer diagnostic tool, which could be easily adapted to other domains and procedures.

Given the complexity of the process by which medical AI makes decisions, firms now tend to emphasize the outcomes that algorithms produce in their marketing to consumers, which feature benefits such as accuracy, convenience and rapidity (performance), while providing few details about how algorithms work (process). Indeed, in an ancillary study examining the marketing of skin cancer smartphone applications (Supplementary Appendix 8), we find that performance-related keywords were used to describe 57–64% of the applications, whereas process-related keywords were used to describe 21% of the applications. Improving subjective understanding of how medical AI works may then not only provide beneficent insights for increasing consumer adoption but also for firms seeking to improve their positioning. Indeed, we find increased advertising efficacy for SkinVision, a skin cancer detection app, when advertising included language explaining how it works.
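As a rough illustration of the keyword tagging described in that ancillary study, here is a small sketch; the keyword lists and sample descriptions are invented for illustration and are not the study’s actual coding scheme.

```python
# Illustrative sketch only: tag app descriptions as mentioning performance
# (outcomes) versus process (how the algorithm works). Keyword lists and
# sample descriptions are assumptions.
PERFORMANCE_TERMS = {"accurate", "accuracy", "fast", "convenient", "reliable"}
PROCESS_TERMS = {"algorithm", "deep learning", "neural network", "image analysis"}

def mentions(description: str, terms: set) -> bool:
    """True if the description contains any of the given keywords."""
    text = description.lower()
    return any(term in text for term in terms)

descriptions = [
    "Fast, accurate skin checks from your phone.",
    "Our deep learning algorithm analyzes photos of moles.",
]

perf = sum(mentions(d, PERFORMANCE_TERMS) for d in descriptions)
proc = sum(mentions(d, PROCESS_TERMS) for d in descriptions)
print(f"performance-related: {perf}/{len(descriptions)}; process-related: {proc}/{len(descriptions)}")
```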

Friday, October 23, 2020

Ethical Dimensions of Using Artificial Intelligence in Health Care

Michael J. Rigby
AMA Journal of Ethics
February 2019

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.

Thursday, May 21, 2020

Discussing the ethics of hydroxychloroquine prescriptions for COVID-19 prevention

Sharon Yoo
KARE11.com
Originally published 19 May 20

President Donald Trump said on Monday that he's been taking hydroxychloroquine to protect himself against the coronavirus. It is a drug typically used to treat malaria and lupus.

The Food and Drug Administration issued warnings that the drug should only be used in clinical trials or for patients at a hospital under the Emergency Use Authorization.

"Yeah, a White House doctor, didn't recommend—I asked him what do you think—and he said well, if you'd like it and I said yeah, I'd like it, I'd like to take it," President Trump said, when a reporter asked him if a White House doctor recommended that he take hydroxychloroquine on Monday.

In a statement, the President's physician, Dr. Sean Conley said after discussions, they've concluded the potential benefit from treatment outweighed the relative risks. All this, despite the FDA warnings.

University of Minnesota bioethics professor Joel Wu said this is problematic.

"It's ethically problematic if the President is being treated for COVID specifically by hydroxychloroquine because our understanding based on the current evidence is not safe or effective in treating or preventing COVID," Wu said.

The info is here.

Monday, April 27, 2020

Experiments on Trial

Hannah Fry
The New Yorker
Originally posted 24 Feb 20

Here are two excerpts:

There are also times when manipulation leaves people feeling cheated. For instance, in 2018 the Wall Street Journal reported that Amazon had been inserting sponsored products in its consumers’ baby registries. “The ads look identical to the rest of the listed products in the registry, except for a small gray ‘Sponsored’ tag,” the Journal revealed. “Unsuspecting friends and family clicked on the ads and purchased the items,” assuming they’d been chosen by the expectant parents. Amazon’s explanation when confronted? “We’re constantly experimenting,” a spokesperson said. (The company has since ended the practice.)

But there are times when the experiments go further still, leaving some to question whether they should be allowed at all. There was a notorious experiment run by Facebook in 2012, in which the number of positive and negative posts in six hundred and eighty-nine thousand users’ news feeds was tweaked. The aim was to see how the unwitting participants would react. As it turned out, those who saw less negative content in their feeds went on to post more positive stuff themselves, while those who had positive posts hidden from their feeds used more negative words.

A public backlash followed; people were upset to discover that their emotions had been manipulated. Luca and Bazerman argue that this response was largely misguided. They point out that the effect was small. A person exposed to the negative news feed “ended up writing about four additional negative words out of every 10,000,” they note. Besides, they say, “advertisers and other groups manipulate consumers’ emotions all the time to suit their purposes. If you’ve ever read a Hallmark card, attended a football game or seen a commercial for the ASPCA, you’ve been exposed to the myriad ways in which products and services influence consumers’ emotions.”

(cut)

Medicine has already been through this. In the early twentieth century, without a set of ground rules on how people should be studied, medical experimentation was like the Wild West. Alongside a great deal of good work, a number of deeply unethical studies took place—including the horrifying experiments conducted by the Nazis and the appalling Tuskegee syphilis trial, in which hundreds of African-American men were denied treatment by scientists who wanted to see how the lethal disease developed. As a result, there are now clear rules about seeking informed consent whenever medical experiments use human subjects, and institutional procedures for reviewing the design of such experiments in advance. We’ve learned that researchers aren’t always best placed to assess the potential harm of their work.

The info is here.

Tuesday, February 4, 2020

Researchers: Are we on the cusp of an ‘AI winter’?

Sam Shead
bbc.com
Originally posted 12 Jan 20

Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated.

The peaks are known as AI summers, and the troughs AI winters.

The 2010s were arguably the hottest AI summer on record, with tech giants repeatedly touting AI's abilities.

AI pioneer Yoshua Bengio, sometimes called one of the "godfathers of AI", told the BBC that AI's abilities were somewhat overhyped in the 2010s by certain companies with an interest in doing so.

There are signs, however, that the hype might be about to start cooling off.

"I have the sense that AI is transitioning to a new phase," said Katja Hoffman, a principal researcher at Microsoft Research in Cambridge.

Given the billions being invested in AI and the fact that there are likely to be more breakthroughs ahead, some researchers believe it would be wrong to call this new phase an AI winter.

The info is here.

Thursday, January 9, 2020

How implicit bias harms patient care

Jeff Bendix
medicaleconomics.com
Originally posted 25 Nov 19

Here is an excerpt:

While many people have difficulty acknowledging that their actions are influenced by unconscious biases, the concept is particularly troubling for doctors, who have been trained to view—and treat—patients equally, and the vast majority of whom sincerely believe that they do.

“Doctors have been molded throughout medical school and all our training to be non-prejudiced when it comes to treating patients,” says James Allen, MD, a pulmonologist and medical director of University Hospital East, part of Ohio State University’s Wexner Medical Center. “It’s not only asked of us, it’s demanded of us, so many physicians would like to think they have no biases. But it’s not true. All human beings have biases.”

“Among physicians, there’s a stigma attached to any suggestion of racial bias,” adds Penner. “And were a person to be identified that way, there could be very severe consequences in terms of their career prospects or even maintaining their license.”

Ironically, as Penner and others point out, the conditions under which most doctors practice today—high levels of stress, frequent distractions, and brief visits that allow little time to get to know patients—are the ones most likely to heighten their vulnerability to unintentional biases.

“A doctor under time pressure from a backlog of overdue charting and whatever else they’re dealing with will have a harder time treating all patients with the same level of empathy and concern,” van Ryn says.

The info is here.

Monday, September 23, 2019

Ohio medical board knew late doctor was sexually assaulting his male patients, but did not remove his license, report says

Richard Strauss
Laura Ly
CNN.com
Originally posted August 30, 2019

Dr. Richard Strauss is believed to have sexually abused at least 177 students at Ohio State University when he worked there between 1978 and 1998. A new investigation has found that the State Medical Board of Ohio knew about the abuse by the late doctor but did nothing.

A new investigation by a working group established by Ohio Gov. Mike DeWine found that the state medical board investigated allegations of sexual misconduct against Strauss in 1996.

The board found credible evidence of sexual misconduct by Strauss and revealed that Strauss had been "performing inappropriate genital exams on male students for years," but no one with knowledge of the case worked to remove his medical license or notify law enforcement, DeWine announced at a press conference Friday.

The investigation revealed that an attorney with the medical board did intend to proceed with a case against Strauss, but for some reason never followed through. That attorney, as well as others involved with the 1996 investigation, are now deceased and cannot be questioned about their conduct, DeWine said.

"We'll likely never know exactly why the case was ultimately ignored by the medical board," DeWine said Friday.

The allegations against Strauss — who died by suicide in 2005 — emerged last year after former Ohio State athletes came forward to claim the doctor had sexually abused them under the guise of a medical examination.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app with a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Sunday, March 10, 2019

Rethinking Medical Ethics

Insights Team
Forbes.com
Originally posted February 11, 2019

Here is an excerpt:

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges.

Avoiding Bias

In 2017, the data analytics team at University of Chicago Medicine (UCM) used AI to predict how long a patient might stay in the hospital. The goal was to identify patients who could be released early, freeing up hospital resources and providing relief for the patient. A case manager would then be assigned to help sort out insurance, make sure the patient had a ride home, and otherwise smooth the way for early discharge.

In testing the system, the team found that the most accurate predictor of a patient’s length of stay was his or her ZIP code. This immediately raised red flags for the team: ZIP codes, they knew, were strongly correlated with a patient’s race and socioeconomic status. Relying on them would disproportionately affect African-Americans from Chicago’s poorest neighborhoods, who tended to stay in the hospital longer. The team decided that using the algorithm to assign case managers would be biased and unethical.
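The UCM example is essentially a feature audit: train the model, look at what drives its predictions, and ask whether the strongest predictors are proxies for protected attributes. Here is a minimal, hypothetical sketch of that kind of audit; the dataset, column names, and model are assumptions, not the hospital's actual system.

```python
# Illustrative sketch only: audit a length-of-stay model for proxy features.
# The CSV file, feature names, and model choice are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

data = pd.read_csv("admissions.csv")
features = ["age", "num_prior_admissions", "num_medications", "zip_code"]
X = pd.get_dummies(data[features], columns=["zip_code"])  # one-hot encode ZIP codes
y = data["length_of_stay_days"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank features by importance; a dominant ZIP-code signal flags a likely proxy
# for race and socioeconomic status, which warrants removal or further review.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```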

The info is here.

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In a case, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken into consideration before consulting Watson. First, it’s important for physicians such as Dr O to understand the technical challenges of accessing quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health care record systems are one culprit, causing missing or incomplete data. If some of the data that is available to Watson is inaccurate, then it could result in diagnosis and treatment recommendations that are flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns that not even human experts might be aware of, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.

Wednesday, February 20, 2019

Precision medicine’s rosy predictions haven’t come true. We need fewer promises and more debate

Michael Joyner and Nigel Paneth
STATnews.com
Originally published February 7, 2019

Here is an excerpt:

While we are occasionally told that we are Luddites or nihilists (generally without much debate of the merits of our position), the most frequent communications we receive have been along the lines of “I agree with you, but can’t speak up publicly for fear of losing my grants, alienating powerful people, or upsetting my dean.” This atmosphere cannot be good for the culture of science.

We are calling for an open debate, in all centers of biomedical research, about the best way forward, and about whether precision medicine is really the most promising avenue for progress. It is time for precision medicine supporters to engage in debate — to go beyond asserting the truism that all individuals are unique, and that the increase in the volume of health data and measurements combined with the decline in the cost of studying the genome constitute sufficient argument for the adoption of the precision medicine program.

Enthusiasts of precision medicine must stop evading the tough questions we raise. The two of us have learned enormously from the free and open exchange of ideas among our small band of dissenters, and we look forward to a vigorous debate engaging an ever-larger fraction of the scientific community.

The info is here.

Sunday, February 17, 2019

Physician burnout now essentially a public health crisis

Priyanka Dayal McCluskey
Boston Globe
Originally posted January 17, 2019

Physician burnout has reached alarming levels and now amounts to a public health crisis that threatens to undermine the doctor-patient relationship and the delivery of health care nationwide, according to a report from Massachusetts doctors to be released Thursday.

The report — from the Massachusetts Medical Society, the Massachusetts Health & Hospital Association, and the Harvard T.H. Chan School of Public Health — portrays a profession struggling with the unyielding demands of electronic health record systems and ever-growing regulatory burdens.

It urges hospitals and medical practices to take immediate action by putting senior executives in charge of physician well-being and by giving doctors better access to mental health services. The report also calls for significant changes to make health record systems more user-friendly.

While burnout has long been a worry in the profession, the report reflects a newer phenomenon — the draining documentation and data entry now required of doctors. Today’s electronic record systems are so complex that a simple task, such as ordering a prescription, can take many clicks.

The info is here.

Thursday, February 7, 2019

Google is quietly infiltrating medicine, but what rules will it play by?

Michael Millenson
STAT News
Originally posted January 3, 2019

Here is an excerpt:

Other tech companies are also making forays into fields previously reserved for physicians as they compete for a slice of the $3.5 trillion health care pie. Renowned surgeon and author Dr. Atul Gawande was hired to head the still-nascent health care joint venture between Amazon, Berkshire Hathaway, and JPMorgan. Apple recently hired more than 50 physicians to tend its growing health care portfolio. Those efforts include Apple Watch apps to detect irregular heart rhythms and falls, a medical record repository on your iPhone, a genetic risk score for heart disease, and a partnership with medical equipment manufacturer Zimmer Biomet aimed at improving knee and hip surgery.

Google is hiring physicians, too. Its high-profile hires include the former chief executives of the Geisinger Clinic and the Cleveland Clinic. The company’s ambitious health care expansion plans reportedly encompass everything from the management of Parkinson’s disease to selling hardware to providers and insurers.

To be clear, I’ve connected the dots among separate Google companies in a way Google might dispute. However, there are some concerns about how and whether any separation of information will be maintained. In November, Bloomberg reported that plans in the United Kingdom to combine an Alphabet subsidiary using artificial intelligence on medical records with the Google search engine were “tripping alarm bells about privacy.”

The info is here.

Monday, October 29, 2018

We hold people with power to account. Why not algorithms?

Hannah Fry
The Guardian
Originally published September 17, 2018

Here is an excerpt:

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

The info is here.