Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Oversight. Show all posts

Wednesday, July 3, 2024

Fake therapist fooled hundreds online until she died, state records say

Brett Kelman
CBS Health Watch
Originally posted July 2, 2024

Hundreds of Americans may have unknowingly received therapy from an untrained impostor who masqueraded as an online therapist, possibly for as long as two years, and the deception crumbled only when she died, according to state health department records.

Peggy A. Randolph, a social worker who was licensed in Florida and Tennessee and formerly worked for Brightside Health, a nationwide online therapy company, is accused of helping her wife impersonate her in online sessions, according to an investigation report from the Florida Department of Health.

The Florida report says the couple "defrauded" patients through a "coordinated effort": As Randolph treated patients in person, her wife pretended to be her in telehealth sessions with Brightside patients. The deceit was discovered after the wife died last year and a patient realized they'd been talking to the wrong person, according to a Tennessee Department of Health settlement agreement.

Records from both states identify Randolph's wife only by her initials, T.R., but her full name is in her obituary: Tammy G. Heath-Randolph. Therapists are generally expected to have at least a master's degree, but Randolph's wife was "not licensed or trained to provide any sort of counseling services," according to the Tennessee agreement.


Here are some thoughts:

This case of an impostor masquerading as a licensed professional in online therapy sessions raises numerous ethical, healthcare, and psychotherapy concerns. The most obvious issues include the severe breach of trust between therapist and patient, the potential harm caused to vulnerable individuals seeking mental health support, and the serious violations of patient privacy. The incident also highlights the critical importance of proper licensing and credentialing in healthcare, especially in telehealth settings.

This case also reveals less apparent but equally significant problems. It exposes potential vulnerabilities in telehealth systems, particularly in verifying the identity of online therapists, suggesting a need for more robust authentication methods.

The alleged involvement of the therapist's wife introduces complex ethical dilemmas regarding personal relationships in professional healthcare contexts. Furthermore, the fact that this deception went unnoticed for an extended period might indicate systemic issues such as therapist burnout or inadequate oversight in the mental health field. The case also demonstrates the challenges in regulating and monitoring telehealth services that operate across multiple states.

Interestingly, this real-life impostor scenario could exacerbate feelings of impostor syndrome among both genuine therapists and patients. The posthumous discovery of the deception presents unique challenges in addressing the harm caused and seeking appropriate resolutions.

Lastly, the financial aspect of this case, where compensation was received for fraudulent sessions, raises important questions about the potential for monetary incentives to compromise ethical standards in healthcare. This incident underscores the urgent need for stronger safeguards in telehealth, improved oversight mechanisms, and a renewed focus on maintaining the integrity of the therapist-patient relationship in the evolving landscape of digital healthcare.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as deserving blame or praise, and how these can be upheld even with AI's presence. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted October 30, 2023

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”


My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Thursday, May 16, 2019

Memorial Sloan Kettering Leaders Violated Conflict-of-Interest Rules, Report Finds

Charles Ornstein and Katie Thomas
ProPublica.org
Originally posted April 4, 2019

Top officials at Memorial Sloan Kettering Cancer Center repeatedly violated policies on financial conflicts of interest, fostering a culture in which profits appeared to take precedence over research and patient care, according to details released on Thursday from an outside review.

The findings followed months of turmoil over executives’ ties to drug and health care companies at one of the nation’s leading cancer centers. The review, conducted by the law firm Debevoise & Plimpton, was outlined at a staff meeting on Thursday morning. It concluded that officials frequently violated or skirted their own policies; that hospital leaders’ ties to companies were likely considered on an ad hoc basis rather than through rigorous vetting; and that researchers were often unaware that some senior executives had financial stakes in the outcomes of their studies.

In acknowledging flaws in its oversight of conflicts of interest, the cancer center announced on Thursday an extensive overhaul of policies governing employees’ relationships with outside companies and financial arrangements — including public disclosure of doctors’ ties to corporations and limits on outside work.


Wednesday, April 10, 2019

FDA Chief Scott Gottlieb Calls for Tighter Regulations on Electronic Health Records

Fred Schulte and Erika Fry
Fortune.com
Originally posted March 21, 2019

Food and Drug Administration Commissioner Scott Gottlieb on Wednesday called for tighter scrutiny of electronic health records systems, which have prompted thousands of reports of patient injuries and other safety problems over the past decade.

“What we really need is a much more tailored approach, so that we have appropriate oversight of EHRs when they’re doing things that could create risk for patients,” Gottlieb said in an interview with Kaiser Health News.

Gottlieb was responding to “Botched Operation,” a report published this week by KHN and Fortune. The investigation found that the federal government has spent more than $36 billion over the past 10 years to switch doctors and hospitals from paper to digital records systems. In that time, thousands of reports of deaths, injuries, and near misses linked to EHRs have piled up in databases—including at least one run by the FDA.


Thursday, February 21, 2019

Federal ethics agency refuses to certify financial disclosure from Commerce Secretary Wilbur Ross

Jeff Daniels
CNBC.com
Originally published February 19, 2019

The government's top ethics watchdog disclosed Tuesday that it had refused to certify a financial disclosure report from Commerce Secretary Wilbur Ross.

In a filing, the Office of Government Ethics said it wouldn't certify the 2018 annual filing by Ross because he didn't divest stock in a bank despite stating otherwise. The move could have legal ramifications for Ross and add to pressure for a federal probe.

"The report is not certified," OGE Director Emory Rounds said in a filing, explaining that a previous document the watchdog received from Ross indicated he "no longer held BankUnited stock." However, Rounds said an Oct. 31 document "demonstrates that he did" still hold the shares and as a result, "the filer was therefore not in compliance with his ethics agreement at the time of the report."

A federal ethics agreement required that Ross divest stock worth between $1,000 and $15,000 in BankUnited by the end of May 2017, or within 90 days of the Senate confirming him to the Commerce post. He previously reported selling the stock twice, first in May 2017 and again in August 2018 as part of an annual disclosure required by OGE.


Monday, February 18, 2019

Trump lawyers may have given false info about Cohen payments

Tal Axelrod
thehill.com
Originally posted February 15, 2019

Rep. Elijah Cummings (D-Md.), the chairman of the House Oversight and Reform Committee, said Friday the panel believes two attorneys for President Trump may have given false information to government ethics officials.

Cummings said the panel has reviewed newly uncovered documents from the Office of Government Ethics (OGE) suggesting Trump's personal lawyer Sheri Dillon and former White House lawyer Stefan Passantino gave false info about hush-money payments to adult-film actress Stormy Daniels and former Playboy model Karen McDougal.

“It now appears that President Trump’s other attorneys — at the White House and in private practice — may have provided false information about these payments to federal officials,” Cummings wrote in a letter to White House Counsel Pat Cipollone.

Cummings said Dillon “repeatedly stated to federal officials at OGE that President Trump never owed any money to Mr. Cohen in 2016 and 2017” and Passantino falsely told officials that Trump and his former lawyer Michael Cohen had a “retainer agreement.”


Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest profile examples: Facebook may be able to infer protected attributes such as sexual orientation, race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on the eligibility for loans and infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.


Monday, November 6, 2017

Is It Too Late For Big Data Ethics?

Kalev Leetaru
Forbes.com
Originally published October 16, 2017

Here is an excerpt:

AI researchers are rushing to create the first glimmers of general AI and hoping for the key breakthroughs that take us towards a world in which machines gain consciousness. The structure of academic IRBs means that little of this work is subject to ethical review of any kind and its highly technical nature means the general public is little aware of the rapid pace of progress until it comes into direct life-or-death contact with consumers such as driverless cars.

Could industry-backed initiatives like one announced by Bloomberg last month in partnership with BrightHive and Data for Democracy be the answer? It all depends on whether companies and organizations actively infuse these values into the work they perform and sponsor or whether these are merely public relations campaigns for them. As I wrote last month, when I asked the organizers of a recent data mining workshop as to why they did not require ethical review or replication datasets for their submissions, one of the organizers, a Bloomberg data scientist, responded only that the majority of other ACM computer science conferences don’t either. When asked why she and her co-organizers didn’t take a stand with their own workshop to require IRB review and replication datasets even if those other conferences did not, in an attempt to start a trend in the field, she would only repeat that such requirements are not common to their field. When asked whether Bloomberg would be requiring its own data scientists to adhere to its new data ethics initiative and/or mandate that they integrate its principles into external academic workshops they help organize, a company spokesperson said they would try to offer comment, but had nothing further to add after nearly a week.


Sunday, August 6, 2017

An erosion of ethics oversight should make us all more cynical about Trump

The Editorial Board
The Los Angeles Times
Originally published August 4, 2017

President Trump’s problems with ethics are manifest, from his refusal to make public his tax returns to the conflicts posed by his continued stake in the Trump Organization and its properties around the world — including the Trump International Hotel just down the street from the White House, in a building leased from the federal government he’s now in charge of. The president’s stubborn refusal to hew to the ethical norms set by his predecessors has left the nation to rightfully question whose best interests are foremost in his mind.

Some of the more persistent challenges to the Trump administration’s comportment have come from the Office of Government Ethics, whose recently departed director, Walter M. Shaub Jr., fought with the administration frequently over federal conflict-of-interest regulations. Under agency rules, chief of staff Shelley K. Finlayson should have been Shaub’s successor until the president nominated a new director, who would need Senate confirmation.

But Trump upended that transition last month by naming the office’s general counsel, David J. Apol, as the interim director. Apol has a reputation within the agency for taking contrarian — and usually more lenient — stances on ethics requirements than did Shaub and the consensus opinion of the staff (including Finlayson). And that, of course, raises the question of whether the White House replaced Finlayson with Apol in hopes of having a more conciliatory ethics chief without enduring a grueling nomination fight.


Saturday, April 22, 2017

As Trump Inquiries Flood Ethics Office, Director Looks To House For Action

By Marilyn Geewax and Peter Overby
npr.org
Originally published April 17, 2017

Office of Government Ethics Director Walter Shaub Jr. is calling on the chairman of the House Oversight Committee to become more engaged in overseeing ethics questions in the Trump administration.

In an interview with NPR on Monday, Shaub said public inquiries and complaints involving Trump administration conflicts of interest and ethics have been inundating his tiny agency, which has only advisory power.

"We've even had a couple days where the volume was so huge it filled up the voicemail box, and we couldn't clear the calls as fast as they were coming in," Shaub said. His office is scrambling to keep pace with the workload.

But while citizens, journalists and Democratic lawmakers are pushing for investigations, Shaub suggested a similar level of energy is not coming from the House Oversight Committee, which has the power to investigate ethics questions, particularly those being raised now about reported secret ethics waivers for former lobbyists serving in the Trump administration.


Sunday, March 12, 2017

Ethics Watchdogs Want U.S. Attorney To Investigate Trump's Business Interests

Jim Zarroli
NPR.org
Originally published March 8, 2017

With Congress showing no signs of taking action, a group of ethics watchdogs is turning to U.S. Attorney Preet Bharara to look into whether President Trump's many business interests violate the Emoluments Clause of the U.S. Constitution.

"Published reports indicate that the Trump Organization and related Trump business entities have been receiving payments from foreign government sources which benefit President Trump through his ownership of the Trump Organization and related business entities," according to a letter sent to Bharara.

(cut)

The Emoluments Clause says that "no Person holding any Office of Profit or Trust under [the U.S. government], shall, without the Consent of the Congress, accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State."

The letter says "there is no question" the clause applies to Trump and that he is violating it, because of the Trump Organization's extensive business operations, many of them tied to foreign governments.


Thursday, June 30, 2016

When regulators close a 'pill mill,' patients sometimes turn to heroin

By Rich Lord
Pittsburgh Post-Gazette
Originally published May 25, 2016

Here is an excerpt:

In late 2013, Maryland launched its prescription drug monitoring program, allowing — but not requiring — doctors to access a database to see the drug histories of their patients. Nearly every state has such a system, designed to thwart people who seek drugs from multiple doctors. Some state medical boards use the data to flag physicians whose prescribing goes out of bounds.

Maryland’s board, though, can’t tap into the data “without going through major legal hoops,” Dr. Singh said. Physician groups, he said, have opposed efforts to ease access, because they fear “over-policing.”

Maryland has not adopted official opioid prescribing guidelines, as some states have.


Wednesday, October 7, 2015

What the FDA’s approval of “pink Viagra” tells us about the problems with drug regulation

by Julia Belluz
Vox
Originally published on September 18, 2015

Here is an excerpt:

The episode raised hard questions about the changes wrought by the patient movement and other reforms that have followed. There were excellent reasons for the FDA to bring HIV-positive patients into its deliberations in the 1980s — they provided a crucial perspective that the agency's in-house scientists and officials lacked. But these days, some critics argue that those listening sessions have been hijacked by drug companies. As I found in my reporting, the patients who had lobbied the FDA to approve pink Viagra were often sponsored by the drug's manufacturer.

"The role of pharma in patient groups in the contemporary era is entirely fraught," says Yale Law School's Gregg Gonsalves, who was once one of those HIV activists in the 1980s. "[Drug companies] learned from the early days of the AIDS epidemic that the patient community could be useful allies, and they've poured money into patient groups here in the US and around the world."

So is the FDA approving drugs too easily? Has the push for speed and efficiency now undermined the agency's ability to protect public health? To find out, I took a closer look at the approval of "pink Viagra," which offers a vivid illustration of just how much the FDA has transformed over time — and why those changes worry many experts.


Sunday, May 19, 2013

Dangers found in lack of safety oversight for Medicare drug benefit

By Tracy Weber, Charles Ornstein and Jennifer LaFleur
ProPublica
Originally published May 11, 2013

Here is an excerpt:

But an investigation by ProPublica has found the program, in its drive to get drugs into patients’ hands, has failed to properly monitor safety. An analysis of four years of Medicare prescription records shows that some doctors and other health professionals across the country prescribe large quantities of drugs that are potentially harmful, disorienting or addictive for their patients. Federal officials have done little to detect or deter these hazardous prescribing patterns.

Searches through hundreds of millions of records turned up physicians such as the Miami psychiatrist who has given hundreds of elderly dementia patients the same antipsychotic, despite the government’s most serious “black box” warning that it increases the risk of death. He believes he has no other options.

Some doctors are using drugs in unapproved ways that may be unsafe or ineffective, records showed. An Oklahoma psychiatrist regularly prescribes the Alzheimer’s drug Namenda for autism patients as young as 12; he says he thinks it calms them. Autism experts said there is scant scientific support for this practice.
