Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, October 21, 2023

Should Trackable Pill Technologies Be Used to Facilitate Adherence Among Patients Without Insight?

Tahir Rahman
AMA J Ethics. 2019;21(4):E332-336.
doi: 10.1001/amajethics.2019.332.

Abstract

Aripiprazole tablets with sensor offer a new wireless trackable form of aripiprazole that represents a clear departure from existing drug delivery systems, routes, or formulations. This tracking technology raises concerns about the ethical treatment of patients with psychosis when it could introduce unintended treatment challenges. The use of “trackable” pills and other “smart” drugs or nanodrugs assumes renewed importance given that physicians are responsible for determining patients’ decision-making capacity. Psychiatrists are uniquely positioned in society to advocate on behalf of vulnerable patients with mental health disorders. The case presented here focuses on guidance for capacity determination and informed consent for such nanodrugs.

(cut)

Ethics and Nanodrug Prescribing

Clinicians often struggle with improving treatment adherence in patients with psychosis who lack insight and decision-making capacity, so trackable nanodrugs, even though not proven to improve compliance, are worth considering. At the same time, guidelines are lacking to help clinicians determine which patients are appropriate for trackable nanodrug prescribing. The introduction of an actual tracking device in a patient who suffers from delusions of an imagined tracking device, like Mr A, raises specific ethical concerns. Clinicians have widely accepted the premise that confronting delusions is countertherapeutic. The introduction of trackable pill technology could similarly introduce unintended harms. Paul Appelbaum has argued that “with paranoid patients often worried about being monitored or tracked, giving them a pill that does exactly that is an odd approach to treatment.” The fear of invasion of privacy might discourage some patients from being compliant with their medical care and thus foster distrust of all psychiatric services. A good therapeutic relationship (often with family, friends, or a guardian involved) is critical to the patient’s engaging in ongoing psychiatric services.

The use of trackable pill technology to improve compliance deserves further scrutiny, as continued reliance on informal physician determinations of decision-making capacity remains standard practice. Most patients are not yet accustomed to the idea of ingesting a trackable pill. Therefore, explanation of the intervention must be incorporated into the informed consent process, assuming the patient has decision-making capacity. Since patients may have concerns about the collected data being stored on a device, clinicians might have to answer questions regarding potential breaches of confidentiality. They will also have to contend with clinical implications of acquiring patient treatment compliance data and justifying decisions based on such information. Below is a practical guide to aid clinicians in appropriate use of this technology.

Friday, October 20, 2023

Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs

Huber, C., Dreber, A., et al. (2023).
Proceedings of the National Academy of Sciences of the United States of America, 120(23).

Abstract

Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity—variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity—estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs—indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.

Significance

Using experiments involves leeway in choosing one out of many possible experimental designs. This choice constitutes a source of uncertainty in estimating the underlying effect size which is not incorporated into common research practices. This study presents the results of a crowd-sourced project in which 45 independent teams implemented research designs to address the same research question: Does competition affect moral behavior? We find a small adverse effect of competition on moral behavior in a meta-analysis involving 18,123 experimental participants. Importantly, however, the variation in effect size estimates across the 45 designs is substantially larger than the variation expected due to sampling errors. This “design heterogeneity” highlights that the generalizability and informativeness of individual experimental designs are limited.

Here are some of the key takeaways from the research:
  • Competition can have a small but significant negative effect on moral behavior.
  • This effect likely arises because competition can make people more self-interested and less concerned about the well-being of others.
  • Effect-size estimates varied substantially across the 45 designs, so conclusions based on any single experimental design have limited generalizability.
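
To make the design-heterogeneity point concrete, below is a minimal simulation sketch (my own illustration, not the authors' code; all numbers are hypothetical) of how a random-effects meta-analysis separates between-design variation from sampling error and compares it with the average standard error, the comparison behind the paper's reported ratio of roughly 1.6.

```python
# Hedged sketch: DerSimonian-Laird estimate of between-design heterogeneity (tau)
# versus the average standard error, using simulated effect sizes for 45 designs.
import numpy as np

rng = np.random.default_rng(0)
k = 45                                    # number of experimental designs
true_tau = 0.08                           # hypothetical between-design SD
se = rng.uniform(0.03, 0.07, size=k)      # hypothetical standard errors
effects = rng.normal(-0.05, true_tau, k) + rng.normal(0.0, se)  # observed effects

w = 1.0 / se**2                           # fixed-effect weights
pooled = np.sum(w * effects) / np.sum(w)  # pooled effect estimate
q = np.sum(w * (effects - pooled) ** 2)   # Cochran's Q statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)        # DerSimonian-Laird tau^2, truncated at zero

print(f"pooled effect    = {pooled:.3f}")
print(f"estimated tau    = {np.sqrt(tau2):.3f}")
print(f"average SE       = {se.mean():.3f}")
print(f"tau / average SE = {np.sqrt(tau2) / se.mean():.2f}")  # the paper reports ~1.6
```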

Thursday, October 19, 2023

10 Things Your Corporate Culture Needs to Get Right

D. Sull and C. Sull
MIT Sloan Management Review
Originally posted 16 September 21

Here are two excerpts:

What distinguishes a good corporate culture from a bad one in the eyes of employees? This is a trickier question than it might appear at first glance. Most leaders agree in principle that culture matters but have widely divergent views about which elements of culture are most important. In an earlier study, we identified more than 60 distinct values that companies listed among their official “core values.” Most often, an organization’s official core values signal top executives’ cultural aspirations, rather than reflecting the elements of corporate culture that matter most to employees.

Which elements of corporate life shape how employees rate culture? To address this question, we analyzed the language workers used to describe their employers. When they complete a Glassdoor review, employees not only rate corporate culture on a 5-point scale, but also describe — in their own words — the pros and cons of working at their organization. The topics they choose to write about reveal which factors are most salient to them, and sentiment analysis reveals how positively (or negatively) they feel about each topic. (Glassdoor reviews are remarkably balanced between positive and negative observations.) By analyzing the relationship between their descriptions and rating of culture, we can start to understand what employees are talking about when they talk about culture.

(cut)

The following chart summarizes the factors that best predict whether employees love (or loathe) their companies. The bars represent each topic’s relative importance in predicting a company’s culture rating. Whether employees feel respected, for example, is 18 times more powerful as a predictor of a company’s culture rating compared with the average topic. We’ve grouped related factors to tease out broader themes that emerge from our analysis.
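
To illustrate the kind of analysis described above (a hedged sketch, not the authors' actual pipeline; the topic names, sentiment scores, and weights are hypothetical), one can regress each review's overall culture rating on per-topic sentiment scores and express each topic's importance relative to the average topic:

```python
# Hedged sketch: estimate each topic's relative importance in predicting the
# overall culture rating, using simulated per-review topic sentiment scores.
import numpy as np

rng = np.random.default_rng(1)
topics = ["respect", "supportive leaders", "perks", "job security"]
n = 5_000                                      # hypothetical number of reviews
X = rng.normal(size=(n, len(topics)))          # hypothetical per-topic sentiment scores
true_weights = np.array([1.8, 0.9, 0.1, 0.3])  # hypothetical influence of each topic
rating = X @ true_weights + rng.normal(scale=1.0, size=n)

# Ordinary least squares: culture rating regressed on topic sentiments
X1 = np.column_stack([np.ones(n), X])          # add an intercept column
coef, *_ = np.linalg.lstsq(X1, rating, rcond=None)
importance = np.abs(coef[1:])                  # drop the intercept
relative = importance / importance.mean()      # importance vs. the average topic

for name, r in sorted(zip(topics, relative), key=lambda t: -t[1]):
    print(f"{name:20s} {r:4.1f}x the average topic")
```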

Here are the 10 cultural dynamics and my take:
  1. Employees feel respected. Employees want to be treated with consideration, courtesy, and dignity. They want their perspectives to be taken seriously and their contributions to be valued.
  2. Employees have supportive leaders. Employees need leaders who will help them to do their best work, respond to their requests, accommodate their individual needs, offer encouragement, and have their backs.
  3. Leaders live core values. Employees need to see that their leaders are committed to the company's core values and that they are willing to walk the talk.
  4. Toxic managers. Toxic managers can create a poisonous work environment and lead to high turnover rates and low productivity.
  5. Unethical behavior. Employees need to have confidence that their colleagues and leaders are acting ethically and honestly.
  6. Employees have good benefits. Employees expect to be compensated fairly and to have access to a good benefits package.
  7. Perks. Perks can be anything from free snacks to on-site childcare to flexible work arrangements. They can help to make the workplace more enjoyable and improve employee morale.
  8. Employees have opportunities for learning and development. Employees want to grow and develop in their careers. They need to have access to training and development opportunities that will help them to reach their full potential.
  9. Job security. Employees need to feel secure in their jobs in order to focus on their work and be productive.
  10. Reorganizations. How employees view reorganizations, including how often they occur and how well they are handled, shapes their assessment of the culture.
The authors argue that these ten elements are essential for creating a corporate culture that is attractive to top talent, drives innovation and productivity, and leads to long-term success.

Additional thoughts

In addition to the ten elements listed above, there are a number of other factors that can contribute to a strong and positive corporate culture. These include:
  • Diversity and inclusion. Employees want to work in a company where they feel respected and valued, regardless of their race, ethnicity, gender, sexual orientation, or other factors.
  • Collaboration and teamwork. Employees want to work in a company where they can collaborate with others and achieve common goals.
  • Open communication and feedback. Employees need to feel comfortable communicating with their managers and colleagues, and they need to be open to receiving feedback.
  • Celebration of success. It is important to celebrate successes and recognize employees for their contributions. This helps to create a positive and supportive work environment.
By investing in these factors, companies can create a corporate culture that is both attractive to employees and beneficial to the bottom line.

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethical Theory and Moral Practice, 26, 361–375 (2023).

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. Moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Monday, October 9, 2023

They Studied Dishonesty. Was Their Work a Lie?

Gideon Lewis-Kraus
The New Yorker
Originally published 30 Sept 23

Here is an excerpt:

Despite a good deal of readily available evidence to the contrary, neoclassical economics took it for granted that humans were rational. Kahneman and Tversky found flaws in this assumption, and built a compendium of our cognitive biases. We rely disproportionately on information that is easily retrieved: a recent news article about a shark attack seems much more relevant than statistics about how rarely such attacks actually occur. Our desires are in flux—we might prefer pizza to hamburgers, and hamburgers to nachos, but nachos to pizza. We are easily led astray by irrelevant details. In one experiment, Kahneman and Tversky described a young woman who had studied philosophy and participated in anti-nuclear demonstrations, then asked a group of participants which inference was more probable: either “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.” More than eighty per cent chose the latter, even though it is a subset of the former. We weren’t Homo economicus; we were giddy and impatient, our thoughts hasty, our actions improvised. Economics tottered.

Behavioral economics emerged for public consumption a generation later, around the time of Ariely’s first book. Where Kahneman and Tversky held that we unconsciously trick ourselves into doing the wrong thing, behavioral economists argued that we might, by the same token, be tricked into doing the right thing. In 2008, Richard Thaler and Cass Sunstein published “Nudge,” which argued for what they called “libertarian paternalism”—the idea that small, benign alterations of our environment might lead to better outcomes. When employees were automatically enrolled in 401(k) programs, twice as many saved for retirement. This simple bureaucratic rearrangement improved a great many lives.

Thaler and Sunstein hoped that libertarian paternalism might offer “a real Third Way—one that can break through some of the least tractable debates in contemporary democracies.” Barack Obama, who hovered above base partisanship, found much to admire in the promise of technocratic tinkering. He restricted his outfit choices mostly to gray or navy suits, based on research into “ego depletion,” or the concept that one might exhaust a given day’s reservoir of decision-making energy. When, in the wake of the 2008 financial crisis, Obama was told that money “framed” as income was more likely to be spent than money framed as wealth, he enacted monthly tax deductions instead of sending out lump-sum stimulus checks. He eventually created a behavioral-sciences team in the White House. (Ariely had once found that our decisions in a restaurant are influenced by whoever orders first; it’s possible that Obama was driven by the fact that David Cameron, in the U.K., was already leaning on a “nudge unit.”)

The nudge, at its best, was modest—even a minor potential benefit at no cost pencilled out. In the Obama years, a pop-up on computers at the Department of Agriculture reminded employees that single-sided printing was a waste, and that advice reduced paper use by six per cent. But as these ideas began to intermingle with those in the adjacent field of social psychology, the reasonable notion that some small changes could have large effects at scale gave way to a vision of individual human beings as almost boundlessly pliable. Even Kahneman was convinced. He told me, “People invented things that shouldn’t have worked, and they were working, and I was enormously impressed by it.” Some of these interventions could be implemented from above. 


Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly more sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are increasingly growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, being targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts. This means that the systems should be designed in a way that makes it clear to users that they are not sentient beings. The second policy is to create systems that are clearly deserving of moral consideration as sentient beings. This means that the systems should be designed to have the same moral status as humans or other animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems that are plausibly debatable as sentient.
  • Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Sunday, September 24, 2023

Consent GPT: Is It Ethical to Delegate Procedural Consent to Conversational AI?

Allen, J., Earp, B., Koplin, J. J., & Wilkinson, D.

Abstract

Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (e.g. junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is through the use of conversational artificial intelligence (AI) using large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients’ access to the relevant procedural information and therefore enhance informed decision-making.

In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine, concluding that, at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.

-------------

Here are some additional points from the article:
  • The authors argue that the current system of delegating procedural consent to human consent-takers is not always effective, as consent-takers may lack sufficient time or clinical knowledge to adequately promote patient autonomy and informed decision-making.
  • They suggest that LLMs could be used to provide patients with more comprehensive and accurate information about procedures, and to answer patients' questions in a way that is tailored to their individual needs.
  • However, the authors also acknowledge that there are a number of ethical concerns that need to be addressed before LLMs can be used for procedural consent. These include concerns about bias, accuracy, and patient trust.

Friday, September 22, 2023

Police are Getting DNA Data from People who Think They Opted Out

Jordan Smith
The Intercept
Originally posted 18 Aug 23

Here is an excerpt:

The communications are a disturbing example of how genetic genealogists and their law enforcement partners, in their zeal to close criminal cases, skirt privacy rules put in place by DNA database companies to protect their customers. How common these practices are remains unknown, in part because police and prosecutors have fought to keep details of genetic investigations from being turned over to criminal defendants. As commercial DNA databases grow, and the use of forensic genetic genealogy as a crime-fighting tool expands, experts say the genetic privacy of millions of Americans is in jeopardy.

Moore did not respond to The Intercept’s requests for comment.

To Tiffany Roy, a DNA expert and lawyer, the fact that genetic genealogists have accessed private profiles — while simultaneously preaching about ethics — is troubling. “If we can’t trust these practitioners, we certainly cannot trust law enforcement,” she said. “These investigations have serious consequences; they involve people who have never been suspected of a crime.” At the very least, law enforcement actors should have a warrant to conduct a genetic genealogy search, she said. “Anything less is a serious violation of privacy.”

(cut)

Exploitation of the GEDmatch loophole isn’t the only example of genetic genealogists and their law enforcement partners playing fast and loose with the rules.

Law enforcement officers have used genetic genealogy to solve crimes that aren’t eligible for genetic investigation per company terms of service and Justice Department guidelines, which say the practice should be reserved for violent crimes like rape and murder only when all other “reasonable” avenues of investigation have failed. In May, CNN reported on a U.S. marshal who used genetic genealogy to solve a decades-old prison break in Nebraska. There is no prison break exception to the eligibility rules, Larkin noted in a post on her website. “This case should never have used forensic genetic genealogy in the first place.”

A month later, Larkin wrote about another violation, this time in a California case. The FBI and the Riverside County Regional Cold Case Homicide Team had identified the victim of a 1996 homicide using the MyHeritage database — an explicit violation of the company’s terms of service, which make clear that using the database for law enforcement purposes is “strictly prohibited” absent a court order.

“The case presents an example of ‘noble cause bias,’” Larkin wrote, “in which the investigators seem to feel that their objective is so worthy that they can break the rules in place to protect others.”


My take:

Forensic genetic genealogists have been skirting GEDmatch privacy rules by searching users who explicitly opted out of sharing DNA with law enforcement. This means that police can access the DNA of people who thought they were protecting their privacy by opting out of law enforcement searches.

The practice of forensic genetic genealogy has been used to solve a number of cold cases, but it has also raised concerns about privacy and civil liberties. Some people worry that the police could use DNA data to target innocent people or to build a genetic database of the entire population.

GEDmatch has since changed its privacy policy to make it more difficult for police to access DNA data from users who have opted out. However, the damage may already be done. Police have already used GEDmatch data to solve dozens of cases, and it is unclear how many people have had their DNA data accessed without their knowledge or consent.

Wednesday, September 20, 2023

Worried about AI in the workplace? You’re not alone

Michele Lerner
American Psychological Association
Originally posted 7 September 23

Here is an excerpt:

“Advances in AI are happening rapidly in the workplace, and many of their effects are uncertain,” says Fred Oswald, PhD, a professor in the department of psychological sciences at Rice University in Houston. “Will AI empower employees and organizations to be more effective? Or consistent with employee worries, will AI replace their jobs? We’re likely to see both. We’ll need more research to inform targeted AI-oriented investments in employee training, career development, mental health, and other interventions.”

We asked Oswald and Leslie Hammer, PhD, emerita professor of psychology at Portland State University and codirector of the Oregon Healthy Workforce Center at the Oregon Health and Science University, to outline ways employers and employees can address the psychological impact of AI in the workplace.

The survey shows 46% of workers worried about AI making some or all of their job duties obsolete intend to look for another job compared with 25% of workers who are not worried about AI. How seriously should employers take workers’ concerns?

Oswald: Both real and perceived job insecurities often motivate employees to look for other jobs. In general, managers should always attempt to maintain healthy communication with their employees, where in this case it would be to understand and address the root cause of AI-related worries. Communication helps overall to ensure the well-being of individual employees and improves the culture and morale of the organization, and this might be more important when AI becomes present in the workplace.

Survey data show worried workers also feel they do not matter in their workplaces, and that they feel micromanaged. Mattering at work is among the five components of a healthy workplace identified by the U.S. Surgeon General. What can employers do to ensure workers feel they matter and to help workers feel more comfortable about AI, given that changes are likely inevitable?

Hammer: It’s very important that workplaces communicate information regarding any changes related to AI clearly and honestly. Fear of the unknown and loss of a sense of control are directly related to psychological distress, occupational stress, and strain, as well as negative physical health outcomes. Providing information about the use of AI and allowing employee input into such changes will significantly alleviate these outcomes.


Monday, September 11, 2023

Kaiser agrees to $49-million settlement for illegal disposal of hazardous waste, protected patient information

Gabriel San Roman
Los Angeles Times
Originally posted 9 September 23

Here are two excerpts:

“The illegal disposal of hazardous and medical waste puts the environment, workers and the public at risk,” Bonta said. “It also violates numerous federal and state laws. As a healthcare provider, Kaiser should know that it has specific legal obligations to properly dispose of medical waste and safeguard patients’ medical information.”

The state attorney general partnered with six district attorney offices — including Alameda, San Bernardino, San Francisco, San Joaquin, San Mateo and Yolo counties — in the undercover probe of 16 Kaiser facilities statewide that first began in 2015.

Investigators found hundreds of hazardous and medical waste items such as syringes, tubing with body fluid and aerosol cans destined for public landfills. The inspections also uncovered more than 10,000 pages of confidential patient files.

During a news conference on Friday, Bonta said that investigators also found body parts in the public waste stream but did not elaborate.

(cut)

As part of the settlement agreement, the healthcare provider must retain an independent third-party auditor approved by the state and local law enforcement involved in the investigation.

Kaiser faces a $1.75-million penalty if adequate steps are not taken within a five-year period.

“As a major corporation in Alameda County, Kaiser Permanente has a special obligation to treat its communities with the same bedside manner as its patients,” said Alameda County Dist. Atty. Pamela Price. “Dumping medical waste and private information are wrong, which they have acknowledged. This action will hold them accountable in such a way that we hope means it doesn’t happen again.”

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Monday, September 4, 2023

Amid Uncertainty About Francesca Gino’s Research, the Many Co-Authors Project Could Provide Clarity

Evan Nesterak
Behavioral Scientist
Originally posted 30 Aug 23

Here are two excerpts:

“The scientific literature must be cleansed of everything that is fraudulent, especially if it involves the work of a leading academic,” the committee wrote. “No more time and money must be wasted on replications or meta-analyses of fabricated data. Researchers’ and especially students’ too rosy view of the discipline, caused by such publications, should be corrected.”

Stapel’s modus operandi was creating fictitious datasets or tampering with existing ones that he would then “analyze” himself, or pass along to other scientists, including graduate students, as if they were real.

“When the fraud was first discovered, limiting the harm it caused for the victims was a matter of urgency,” the committee said. “This was particularly the case for Mr. Stapel’s former Ph.D. students and postdoctoral researchers, whose publications were suddenly becoming worthless.”

Why revisit the decade-old case of Stapel now? 

Because its echoes can be heard in the unfolding case of Harvard Business School Professor Francesca Gino as she faces allegations of data fraud, and her coauthors, colleagues, and the broader scientific community figure out how to respond. Listening to these echoes, especially those of the Stapel committee, helps put the Gino situation, and the efforts to remedy it, in greater perspective.

(cut)

“After a comprehensive evaluation that took 18 months from start to completion, the investigation committee—comprising three senior HBS colleagues—determined that research misconduct had occurred,” his email said. “After reviewing their detailed report carefully, I could come to no other conclusion, and I accepted their findings.”

He added: “I ultimately accepted the investigation committee’s recommended sanctions, which included immediately placing Professor Gino on administrative leave and correcting the scientific record.”

While it is unclear how the lawsuit will play out, many scientists have expressed concern about the chilling effects it might have on scientists’ willingness to come forward if they suspect research misconduct. 

“If the data are not fraudulent, you ought to be able to show that. If they are, but the fraud was done by someone else, name the person. Suing individual researchers for tens of millions of dollars is a brazen attempt to silence legitimate scientific criticism,” psychologist Yoel Inbar commented on Gino’s statement on Linkedin. 

It is this sentiment that led 13 behavioral scientists (some of whom have coauthored with Gino) to create a GoFundMe campaign on behalf of Simonsohn, Simmons, and Nelson to help raise money for their legal defense. 

Monday, August 21, 2023

Cigna Accused of Using AI, Not Doctors, to Deny Claims: Lawsuit

Steph Weber
Medscape.com
Originally posted 4 August 23

A new lawsuit alleges that Cigna uses artificial intelligence (AI) algorithms to inappropriately deny "hundreds or thousands" of claims at a time, bypassing legal requirements to complete individual claim reviews and forcing providers to bill patients in full.

In a complaint filed last week in California's eastern district court, plaintiffs and Cigna health plan members Suzanne Kisting-Leung and Ayesha Smiley and their attorneys say that Cigna violates state insurance regulations by failing to conduct a "thorough, fair, and objective" review of their and other members' claims.

The lawsuit says that instead, Cigna relies on an algorithm, PxDx, to review and frequently deny medically necessary claims. According to court records, the system allows Cigna's doctors to "instantly reject claims on medical grounds without ever opening patient files." With use of the system, the average claims processing time is 1.2 seconds.

Cigna says it uses technology to verify coding on standard, low-cost procedures and to expedite physician reimbursement. In a statement to CBS News, the company called the lawsuit "highly questionable."

The case highlights growing concerns about AI and its ability to replace humans for tasks and interactions in healthcare, business, and beyond. Public advocacy law firm Clarkson, which is representing the plaintiffs, has previously sued tech giants Google and ChatGPT creator OpenAI for harvesting internet users' personal and professional data to train their AI systems.

According to the complaint, Cigna denied the plaintiffs medically necessary tests, including bloodwork to screen for vitamin D deficiency and ultrasounds for patients suspected of having ovarian cancer. The plaintiffs' attempts to appeal were unfruitful, and they were forced to pay out of pocket.

(cut)

Last year, the American Medical Association and two state physician groups joined another class action against Cigna stemming from allegations that the insurer's intermediary, Multiplan, intentionally underpaid medical claims. And in March, Cigna's pharmacy benefit manager (PBM), Express Scripts, was accused of conspiring with other PBMs to drive up prescription drug prices for Ohio consumers, violating state antitrust laws.

Cohen says he expects Cigna to push back in court about the California class size, which the plaintiff's attorneys hope will encompass all Cigna health plan members in the state.

Sunday, August 20, 2023

When Scholars Sue Their Accusers. Francesca Gino is the Latest. Such Litigation Rarely Succeeds.

Adam Marcus and Ivan Oransky
The Chronicle of Higher Education
Originally posted 18 AUG 23

Francesca Gino has made headlines twice since June: once when serious allegations of misconduct involving her work became public, and again when she filed a $25-million lawsuit against her accusers, including Harvard University, where she is a professor at the business school.

The suit itself met with a barrage of criticism from those who worried that, as one scientist put it, it would have a “chilling effect on fraud detection.” A smaller number of people supported the move, saying that Harvard and her accusers had abandoned due process and that they believed in Gino’s integrity.

How the case will play out, of course, remains to be seen. But Gino is hardly the first researcher to sue her critics and her employer when faced with misconduct findings. As the founders of Retraction Watch, a website devoted to covering problems in the scientific literature, we’ve reported many of these kinds of cases since we launched our blog in 2010. Plaintiffs tend to claim defamation, but sometimes sue over wrongful termination or employment discrimination, and these kinds of cases typically end up in federal courts. A look at how some other suits fared might yield recommendations for how to limit the pain they can cause.

The first thing to know about defamation and employment suits is that most plaintiffs, but not all, lose. Mario Saad, a diabetes researcher at Brazil’s Unicamp, found that out when he sued the American Diabetes Association in the very same federal district court in Massachusetts where Gino filed her case.

Saad was trying to prevent Diabetes, the flagship research journal of the American Diabetes Association, from publishing expressions of concern about four of his papers following allegations of image manipulation. He lost that effort in 2015, and has now had 18 papers retracted.

(cut)

Such cases can be extremely expensive — not only for the defense, whether the costs are borne by institutions or insurance companies, but also for the plaintiffs. Ask Carlo Croce and Mark Jacobson.

Croce, a cancer researcher at Ohio State University, has at various points sued The New York Times, a Purdue University biologist named David Sanders, and Ohio State. He has lost all of those cases, including on appeal. The suits against the Times and Sanders claimed that a front-page story in 2017 that quoted Sanders had defamed Croce. His suit against Ohio State alleged that he had been improperly removed as department chair.

Croce racked up some $2 million in legal bills — and was sued for nonpayment. A judge has now ordered Croce’s collection of old masters paintings to be seized and sold for the benefit of his lawyers, and has also garnished Croce’s bank accounts. Another judgment means that his lawyers may now foreclose on his house to recoup their costs. Ohio State has been garnishing his wages since March by about $15,600 each month, or about a quarter of his paycheck. He continues to earn more than $800,000 per year from the university, even after a professorship and the chair were taken away from him.

When two researchers published a critique of the work of Mark Jacobson, an energy researcher at Stanford University, in the Proceedings of the National Academy of Sciences, Jacobson sued them along with the journal’s publisher for $10 million. He dropped the case just months after filing it.

But thanks to a so-called anti-SLAPP statute, “designed to provide for early dismissal of meritless lawsuits filed against people for the exercise of First Amendment rights,” a judge has ordered Jacobson to pay $500,000 in legal fees to the defendants. Jacobson wants Stanford to pay those costs, and California’s labor commissioner said the university had to pay at least some of them because protecting his reputation was part of Jacobson’s job. The fate of those fees, and who will pay them, is up in the air, with Jacobson once again appealing the judgment against him.

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Wednesday, August 9, 2023

The Moral Crisis of America’s Doctors

Wendy Dean & Elisabeth Rosenthal
The New York Times
Originally posted 15 July 23

Here is an excerpt:

Some doctors acknowledged that the pressures of the system had occasionally led them to betray the oaths they took to their patients. Among the physicians I spoke to about this, a 45-year-old critical-care specialist named Keith Corl stood out. Raised in a working-class town in upstate New York, Corl was an idealist who quit a lucrative job in finance in his early 20s because he wanted to do something that would benefit people. During medical school, he felt inspired watching doctors in the E.R. and I.C.U. stretch themselves to the breaking point to treat whoever happened to pass through the doors on a given night. “I want to do that,” he decided instantly. And he did, spending nearly two decades working long shifts as an emergency physician in an array of hospitals, in cities from Providence to Las Vegas to Sacramento, where he now lives. Like many E.R. physicians, Corl viewed his job as a calling. But over time, his idealism gave way to disillusionment, as he struggled to provide patients with the type of care he’d been trained to deliver. “Every day, you deal with somebody who couldn’t get some test or some treatment they needed because they didn’t have insurance,” he said. “Every day, you’re reminded how savage the system is.”

Corl was particularly haunted by something that happened in his late 30s, when he was working in the emergency room of a hospital in Pawtucket, R.I. It was a frigid winter night, so cold you could see your breath. The hospital was busy. When Corl arrived for his shift, all of the facility’s E.R. beds were filled. Corl was especially concerned about an elderly woman with pneumonia who he feared might be slipping into sepsis, an extreme, potentially fatal immune response to infection. As Corl was monitoring her, a call came in from an ambulance, informing the E.R. staff that another patient would soon be arriving, a woman with severe mental health problems. The patient was familiar to Corl — she was a frequent presence in the emergency room. He knew that she had bipolar disorder. He also knew that she could be a handful. On a previous visit to the hospital, she detached the bed rails on her stretcher and fell to the floor, injuring a nurse.

In a hospital that was adequately staffed, managing such a situation while keeping tabs on all the other patients might not have been a problem. But Corl was the sole doctor in the emergency room that night; he understood this to be in part a result of cost-cutting measures (the hospital has since closed). After the ambulance arrived, he and a nurse began talking with the incoming patient to gauge whether she was suicidal. They determined she was not. But she was combative, arguing with the nurse in an increasingly aggressive tone. As the argument grew more heated, Corl began to fear that if he and the nurse focused too much of their attention on her, other patients would suffer needlessly and that the woman at risk of septic shock might die.

Corl decided he could not let that happen. Exchanging glances, he and the nurse unplugged the patient from the monitor, wheeled her stretcher down the hall, and pushed it out of the hospital. The blast of cold air when the door swung open caused Corl to shudder. A nurse called the police to come pick the patient up. (It turned out that she had an outstanding warrant and was arrested.) Later, after he returned to the E.R., Corl could not stop thinking about what he’d done, imagining how the medical-school version of himself would have judged his conduct. “He would have been horrified.”


Summary: The article explores the moral distress that many doctors are experiencing in the United States healthcare system. Doctors are feeling increasingly pressured to make decisions based on financial considerations rather than what is best for their patients. This is leading to a number of problems, including:
  • Decreased quality of care: Doctors are being forced to cut corners on care, which is leading to worse outcomes for patients.
  • Increased burnout: Doctors are feeling increasingly stressed and burned out, which is making it difficult for them to provide quality care.
  • Loss of moral compass: Doctors are feeling like they are losing their moral compass, as they are being forced to make decisions that they know are not in the best interests of their patients.
The article concludes by calling for a number of reforms to the healthcare system, including:
  • Paying doctors based on quality of care, not volume of services: This would incentivize doctors to provide the best possible care, rather than just the most profitable care.
  • Giving doctors more control over their practice: This would allow doctors to make decisions based on what is best for their patients, rather than what is best for their employers.
  • Supporting doctors' mental health: Doctors need to be supported through the challenges of providing care in the current healthcare system.

Sunday, August 6, 2023

Harvard professor accused of research fraud files defamation lawsuit against university, academics

Alex Koller
The Boston Globe
Originally posted 4 August 23

Here is an excerpt:

In the filing, Gino, a renowned behavioral scientist who studies the psychology of decisions, denied having ever falsified or fabricated data. She alleged that Harvard’s investigation into her work was unfair and biased.

The lawsuit alleges that the committee did not prove by a preponderance of the evidence that Gino “intentionally, knowingly, or recklessly” falsified or fabricated data, as Harvard policy required, and “ignored” exculpatory evidence. The suit also decries Data Colada’s posts as a “vicious, defamatory smear campaign.” The blog’s inquiries into Gino’s work initially sparked Harvard’s investigation.

In a statement posted to LinkedIn Wednesday, Gino refuted allegations against her and explained her decision to take legal action against Harvard and Data Colada.

“I want to be very clear: I have never, ever falsified data or engaged in research misconduct of any kind,” she wrote. “Today I had no choice but to file a lawsuit against Harvard University and members of the Data Colada group, who worked together to destroy my career and reputation despite admitting they have no evidence proving their allegations.”

She added that the university and authors “reached outrageous conclusions based entirely on inference, assumption, and implausible leaps of logic.”

The lawsuit accuses all of the defendants of defamation, and also accuses Harvard of gender discrimination, breach of contract, and bad faith and unfair dealing with Gino, who has been a tenured professor of business administration at Harvard since 2014.

Gino was first notified by Harvard of fraud allegations against her work in October 2021, according to the suit. She then learned that the university would conduct its own investigation in April 2022.

The filing alleges that Harvard’s investigation committee interviewed six of Gino’s collaborators and two research assistants, all of whom defended the integrity of Gino’s practices and said they had no evidence Gino had ever pressured anyone to produce a specific result.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors.  By contrast, decision-makers do disregard advice from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Tuesday, July 25, 2023

Inside the DeSantis Doc That Showtime Didn’t Want You to See

Roger Sollenberger
The Daily Beast
Originally posted 23 July 23

Here are two excerpts:

The documentary contrasts DeSantis’ account with those of two anonymous ex-prisoners, whom the transcript indicated were not represented in the flesh; their claims were delivered in “voice notes.”

“Officer DeSantis was one of the officers who oversaw the force-feeding and torture we were subjected to in 2006,” one former prisoner said. The second former detainee claimed that DeSantis was “one of the officers who mistreated us,” adding that DeSantis was “a bad person” and “a very bad officer.”

Over a view of “Camp X-Ray”—the now-abandoned section of Gitmo where DeSantis was stationed but has since fallen into disrepair—the narrator revealed that a VICE freedom of information request for the Florida governor’s active duty record returned “little about Guantanamo” outside of his arrival in March 2006.

But as the documentary noted, that period was “a brutal point in the prison’s history.”

Detainees had been on a prolonged hunger strike to call attention to their treatment, and the government’s solution was to force-feed prisoners Ensure dietary supplements through tubes placed in their noses. Detainees alleged the process caused excessive bleeding and was repeated “until they vomited and defecated on themselves.” (DeSantis, a legal adviser, would almost certainly have been aware that the UN concluded that force-feeding amounted to torture the month before he started working at Guantanamo.)

(cut)

The transcript then presented DeSantis’ own 2018 account of his role in the forced-feedings, when he told CBS News Miami that he had personally and professionally endorsed force-feeding as a legal way to break prisoner hunger strikes.

“The commander wants to know, well how do I combat this? So one of the jobs as a legal adviser will be like, ‘Hey, you actually can force feed, here’s what you can do, here’s kinda the rules of that,’” DeSantis said at the time.

DeSantis altered that language in a Piers Morgan interview this March, again invoking his junior rank as evidence that he would have lacked standing to order forced-feeding.

“There may have been a commander that would have done feeding if someone was going to die, but that was not something that I would have even had authority to do,” he said. However, DeSantis did not deny that he had provided that legal advice.


My thoughts:

I would like to see the documentary and make my own decision about its veracity.
  • The decision by Showtime to pull the episode is a significant one, as it suggests that the network is willing to censor its programming in order to avoid political controversy.
  • This is a worrying development, as it raises questions about the future of independent journalism in the United States.
  • If news organizations are afraid to air stories that are critical of powerful figures, then it will be much more difficult for the public to hold those figures accountable.
  • I hope that Showtime will reconsider its decision and allow the episode to air. The public has a right to know about the allegations against DeSantis, and it is important that these allegations be given a fair hearing.

Sunday, July 23, 2023

How to Use AI Ethically for Ethical Decision-Making

Demaree-Cotton, J., Earp, B. D., & Savulescu, J.
(2022). American Journal of Bioethics, 22(7), 1–3.

Here is an excerpt:

The kind of AI proposed by Meier and colleagues (2022) has the fascinating potential to improve the transparency of ethical decision-making, at least if it is used as a decision aid rather than a decision replacement (Savulescu & Maslen 2015). While artificial intelligence cannot itself engage in the human communicative process of justifying its decisions to patients, the AI they describe (unlike “black-box” AI) makes explicit which values and principles are involved and how much weight they are given.

By contrast, the moral principles or values underlying human moral intuition are not always consciously, introspectively accessible (Cushman, Young, and Hauser 2006). While humans sometimes have a fuzzy, intuitive sense of some of the factors that are relevant to their moral judgment, we often have strong moral intuitions without being sure of their source, or without being clear on precisely how strongly different factors played a role in generating the intuitions. But if clinicians make use of the AI as a decision aid, this could help them to transparently and precisely communicate the actual reasons behind their decision.

This is so even if the AI’s recommendation is ultimately rejected. Suppose, for example, that the AI recommends a course of action, with a certain amount of confidence, and it specifies the exact values or weights it has assigned to autonomy versus beneficence in coming to this conclusion. Evaluating the recommendation made by the AI could help a committee make more explicit the “black box” aspects of their own reasoning. For example, the committee might decide that beneficence should actually be weighted more heavily in this case than the AI suggests. Being able to understand the reason that their decision diverges from that of the AI gives them the opportunity to offer a further justifying reason as to why they think beneficence should be given more weight; and this, in turn, could improve the transparency of their recommendation.
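
As an illustration (my own hypothetical sketch, not Meier and colleagues' system; the option names, principles, and weights are invented), a decision aid of this sort can expose its principle weights explicitly, so a committee can see exactly where its own judgment diverges:

```python
# Hedged sketch of a "transparent" weighted-principle decision aid: the weights and
# per-option scores are explicit, so a committee can argue with specific numbers.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    scores: dict[str, float]   # how well the option serves each principle, 0..1

def recommend(options: list[Option], weights: dict[str, float]) -> tuple[Option, dict]:
    """Return the highest-scoring option plus a per-principle breakdown."""
    def total(o: Option) -> float:
        return sum(weights[p] * o.scores.get(p, 0.0) for p in weights)
    best = max(options, key=total)
    breakdown = {p: weights[p] * best.scores.get(p, 0.0) for p in weights}
    return best, breakdown

# Hypothetical case, initially weighted toward beneficence
weights = {"autonomy": 0.4, "beneficence": 0.6}
options = [
    Option("continue treatment", {"autonomy": 0.2, "beneficence": 0.9}),
    Option("withdraw treatment", {"autonomy": 0.8, "beneficence": 0.3}),
]
best, breakdown = recommend(options, weights)
print(best.name, breakdown)
# A committee that thinks autonomy deserves more weight can rerun with
# weights = {"autonomy": 0.7, "beneficence": 0.3} and watch the recommendation flip.
```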

However, the potential for the kind of AI described in the target article to improve the accuracy of moral decision-making may be more limited. This is so for two reasons. Firstly, whether AI can be expected to outperform human decision-making depends in part on the metrics used to train it. In non-ethical domains, superior accuracy can be achieved because the “verdicts” given to the AI in the training phase are not solely the human judgments that the AI is intended to replace or inform. Consider how AI can learn to detect lung cancer from scans at a superior rate to human radiologists after being trained on large datasets and being “told” which scans show cancer and which ones are cancer-free. Importantly, this training includes cases where radiologists did not recognize cancer in the early scans themselves, but where further information verified the correct diagnosis later on (Ardila et al. 2019). Consequently, these AIs are able to detect patterns even in early scans that are not apparent or easily detectable by human radiologists, leading to superior accuracy compared to human performance.