Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Harm.

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding lethal autonomous weapons (LAWs) - essentially autonomous drones and other systems with AI that can choose and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage of qualified human therapists, makes the proposal of chatbot therapists a timely and perhaps viable alternative. However, we need to be cautious about hasty implementations of such an alternative. Recent news has reported grave incidents involving chatbot-human interactions. For example, Walker (2023) reports the death of an eco-anxious man who died by suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potentially maleficent effect of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Wednesday, April 10, 2024

Why the world cannot afford the rich

R. G. Wilkinson & K. E. Pickett
Nature.com
Originally published 12 March 24

Here is an excerpt:

Inequality also increases consumerism. Perceived links between wealth and self-worth drive people to buy goods associated with high social status and thus enhance how they appear to others — as US economist Thorstein Veblen set out more than a century ago in his book The Theory of the Leisure Class (1899). Studies show that people who live in more-unequal societies spend more on status goods [14].

Our work has shown that the amount spent on advertising as a proportion of gross domestic product is higher in countries with greater inequality. The well-publicized lifestyles of the rich promote standards and ways of living that others seek to emulate, triggering cascades of expenditure for holiday homes, swimming pools, travel, clothes and expensive cars.

Oxfam reports that, on average, each of the richest 1% of people in the world produces 100 times the emissions of the average person in the poorest half of the world’s population [15]. That is the scale of the injustice. As poorer countries raise their material standards, the rich will have to lower theirs.

Inequality also makes it harder to implement environmental policies. Changes are resisted if people feel that the burden is not being shared fairly. For example, in 2018, the gilets jaunes (yellow vests) protests erupted across France in response to President Emmanuel Macron’s attempt to implement an ‘eco-tax’ on fuel by adding a few percentage points to pump prices. The proposed tax was seen widely as unfair — particularly for the rural poor, for whom diesel and petrol are necessities. By 2019, the government had dropped the idea. Similarly, Brazilian truck drivers protested against rises in fuel tax in 2018, disrupting roads and supply chains.

Do unequal societies perform worse when it comes to the environment, then? Yes. For rich, developed countries for which data were available, we found a strong correlation between levels of equality and a score on an index we created of performance in five environmental areas: air pollution; recycling of waste materials; the carbon emissions of the rich; progress towards the United Nations Sustainable Development Goals; and international cooperation (UN treaties ratified and avoidance of unilateral coercive measures).


The article argues that rising economic inequality is a major threat to the world's well-being. Here are the key points:

The rich are capturing a growing share of wealth: The richest 1% are accumulating wealth much faster than everyone else, and their lifestyles contribute heavily to environmental damage.

Inequality harms everyone: High levels of inequality are linked to social problems like crime, mental health issues, and lower social mobility. It also makes it harder to address environmental challenges because people resist policies seen as unfair.

More equal societies perform better: Countries with a more even distribution of wealth tend to have better social and health outcomes, as well as stronger environmental performance.

Policymakers need to take action: The article proposes progressive taxation, closing tax havens, and encouraging more equitable business practices like employee ownership.

The overall message is that reducing inequality is essential for solving a range of environmental, social, and health problems.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs. This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Research has shown that AI chatbots can be effective in reducing the severity of mental health issues like anxiety, depression, and stress for diverse populations.  They can deliver evidence-based interventions like cognitive behavioral therapy and promote positive psychology.  Some well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Tuesday, March 19, 2024

As guns rise to leading cause of death among US children, research funding to help prevent and protect victims lags

Deidre McPhillips
CNN.org
Originally posted 7 Feb 24

More children die from guns than anything else in the United States, but relatively little funding is available to study how to prevent these tragedies.

From 2008 to 2017, about $12 million in federal research awards were granted to study pediatric firearm mortality each year – about $600 per life lost, according to a study published in Health Affairs. Motor vehicle crashes, the leading cause of death among children at the time, received about $26,000 of research funding per death, while funding to study pediatric cancer, the third leading cause of death, topped $195,000 per death.

By 2020, firearm deaths in the US had reached record levels and guns had surpassed car crashes to become the leading cause of death among children. More than 4,300 children and teens died from guns in 2020, according to data from the US Centers for Disease Control and Prevention – a 27% jump from 2017, and a number that has only continued to rise. But federal dollars haven’t followed proportionately.

Congress has earmarked about $25 million for firearm injury prevention research each year since 2020, split evenly between the CDC and the National Institutes of Health. Even if all of those dollars were spent on studies focused on pediatric deaths from firearm injury, it’d still be less than $6,000 per death.
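
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It assumes, as a simplification, that annual deaths stay near the 2020 level the article cites, even though the article notes the number has continued to rise:

annual_funding = 25_000_000   # congressional earmark for firearm injury prevention research, per year
deaths_2020 = 4_300           # children and teens killed by guns in 2020, per the CDC figures cited above
funding_per_death = annual_funding / deaths_2020
print(round(funding_per_death))   # ~5814, i.e. less than $6,000 per death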


The article highlights the critical need for increased research funding to prevent firearm-related deaths among children and teens in the U.S. Despite guns becoming the leading cause of death in this demographic, research funding remains insufficient. This lack of investment hinders the development of life-saving solutions and policies to address gun violence effectively. To protect our youth and combat this pressing issue, substantial and sustained funding for research on gun violence prevention is imperative.

Or, we could have more sensible gun laws to protect children and adolescents.

Tuesday, February 20, 2024

Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

Mello, M. M., & Guha, N. (2024).
The New England journal of medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901

Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being "augmented." One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible?

Some attorneys counsel health care organizations with dire warnings about liability [1] and dauntingly long lists of legal concerns. Unfortunately, liability concern can lead to overly conservative decisions, including reluctance to try new things. Yet, older forms of clinical decision support provided important opportunities to prevent errors and malpractice claims. Given the slow progress in reducing diagnostic errors, not adopting new tools also has consequences and at some point may itself become malpractice. Liability uncertainty also affects AI developers' cost of capital and incentives to develop particular products, thereby influencing which AI innovations become available and at what price.

To help health care organizations and physicians weigh AI-related liability risk against the benefits of adoption, we examine the issues that courts have grappled with in cases involving software error and what makes them so challenging. Because the signals emerging from case law remain somewhat faint, we conducted further analysis of the aspects of AI tools that elevate or mitigate legal risk. Drawing on both analyses, we provide risk-management recommendations, focusing on the uses of AI in direct patient care with a "human in the loop" since the use of fully autonomous systems raises additional issues.

(cut)

The Awkward Adolescence of Software-Related Liability

Legal precedent regarding AI injuries is rare because AI models are new and few personal-injury claims result in written opinions. As this area of law matures, it will confront several challenges.

Challenges in Applying Tort Law Principles to Health Care Artificial Intelligence (AI).

Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient. The liabilities of the physician and product maker are derived from different standards of care, but for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability.

The article is paywalled, which is not how this should work.

Friday, February 16, 2024

Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms

B. Kuehn
JAMA
Published Online: January 17, 2024.
doi:10.1001/jama.2023.25530

Here is an excerpt:

The roots of the false idea that race is a biological construct can be traced to efforts to draw distinctions between Black and White people to justify slavery, the CMSS report notes. For example, the third US president, Thomas Jefferson, claimed that Black people had less kidney output, more heat tolerance, and poorer lung function than White individuals. Louisiana physician Samuel Cartwright, MD, subsequently rationalized hard labor as a way for slaves to fortify their lungs. Over time, the report explains, the medical literature echoed some of those ideas, which have been used in ways that cause harm.

“It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” Burstin said.

Renewed recognition of these harmful legacies and growing evidence of the potential harm caused by structural racism, bias, and discrimination in medicine have led to reconsideration of the use of race in clinical algorithms. The reckoning with racial injustice sparked by the May 2020 murder of George Floyd helped accelerate this work. A few weeks after Floyd’s death, an editorial in the New England Journal of Medicine recommended reconsidering race in 13 clinical algorithms, echoing a growing chorus of medical students and physicians arguing for change.

Congress also got involved. As a Robert Wood Johnson Foundation Health Policy Fellow, Michelle Morse, MD, MPH, raised concerns about the use of race in clinical algorithms to US Rep Richard Neal (D, MA), then chairman of the House Ways and Means Committee. Neal in September 2020 sent letters to several medical societies asking them to assess racial bias and a year later he and his colleagues issued a report on the misuse of race in clinical decision-making tools.

“We need to have more humility in medicine about the ways in which our history as a discipline has actually held back health equity and racial justice,” Morse said in an interview. “The issue of racism and clinical algorithms is one really tangible example of that.”


My summary: There's increasing worry that using race in clinical algorithms can be harmful and perpetuate racial disparities in healthcare. This concern stems from a recognition of the historical harms of racism in medicine and growing evidence of bias in algorithms.

A review commissioned by the Agency for Healthcare Research and Quality (AHRQ) found that using race in algorithms can exacerbate health disparities and reinforce the false idea that race is a biological factor.

Several medical organizations and experts have called for reevaluating the use of race in clinical algorithms. Some argue that race should be removed altogether, while others advocate for using it only in specific cases where it can be clearly shown to improve outcomes without causing harm.

Wednesday, February 14, 2024

Responding to Medical Errors—Implementing the Modern Ethical Paradigm

T. H. Gallagher &  A. Kachalia
The New England Journal of Medicine
January 13, 2024
DOI: 10.1056/NEJMp2309554

Here are some excerpts:

Traditionally, recommendations regarding responding to medical errors focused mostly on whether to disclose mistakes to patients. Over time, empirical research, ethical analyses, and stakeholder engagement began to inform expectations - which are now embodied in communication and resolution programs (CRPs) — for how health care professionals and organizations should respond not just to errors but any time patients have been harmed by medical care (adverse events). CRPs require several steps: quickly detecting adverse events, communicating openly and empathetically with patients and families about the event, apologizing and taking responsibility for errors, analyzing events and redesigning processes to prevent recurrences, supporting patients and clinicians, and proactively working with patients toward reconciliation. In this modern ethical paradigm, any time harm occurs, clinicians and health care organizations are accountable for minimizing suffering and promoting learning. However, implementing this ethical paradigm is challenging, especially when the harm was due to an error.

Historically, the individual physician was deemed the "captain of the ship," solely accountable for patient outcomes. Bioethical analyses emphasized the fiduciary nature of the doctor-patient relationship (i.e., doctors are in a position of greater knowledge and power) and noted that telling patients...about harmful errors supported patient autonomy and facilitated informed consent for future decisions. However, under U.S. tort law, physicians and organizations can be held accountable and financially liable for damages when they make negligent errors. As a result, ethical recommendations for openness were drowned out by fears of lawsuits and payouts, leading to a "deny and defend" response. Several factors initiated a paradigm shift. In the early 2000s, reports from the Institute of Medicine transformed the way the health care profession conceptualized patient safety [1]. The imperative became creating cultures of safety that encouraged everyone to report errors to enable learning and foster more reliable systems. Transparency assumed greater importance, since you cannot fix problems you don't know about. The ethical imperative for openness was further supported when rising consumerism made it clear that patients expected responses to harm to include disclosure of what happened, an apology, reconciliation, and organizational learning.

(cut)

CRP Model for Responding to Harmful Medical Errors

Research has been critical to CRP expansion. Several studies have demonstrated that CRPs can enjoy physician support and operate without increasing liability risk. Nonetheless, research also shows that physicians remain concerned about their ability to communicate with patients and families after a harmful error and worry about liability risks including being sued, having their malpractice premiums raised, and having the event reported to the National Practitioner Data Bank (NPDB) [5]. Successful CRPs typically deploy a formal team, prioritize clinician and leadership buy-in, and engage liability insurers in their efforts. The table details the steps associated with the CRP model, the ethical rationale for each step, barriers to implementation, and strategies for overcoming them.

The growth of CRPs also reflects collaboration among diverse stakeholder groups, including patient advocates, health care organizations, plaintiff and defense attorneys, liability insurers, state medical associations, and legislators. Sustained stakeholder engagement that respects the diverse perspectives of each group has been vital, given the often opposing views these groups have espoused.
As CRPs proliferate, it will be important to address a few key challenges and open questions in implementing this ethical paradigm.


The article provides a number of recommendations for how healthcare providers can implement these principles. These include:
  • Developing open and honest communication with patients.
  • Providing timely and accurate information about the error.
  • Offering apologies and expressing empathy for the harm that has been caused.
  • Working with patients to develop a plan to address the consequences of the error.
  • Conducting a thorough investigation of the error to identify the root causes and prevent future errors.
  • Sharing the results of the investigation with patients and the public.

Tuesday, February 13, 2024

Majority of debtors to US hospitals now people with health insurance

Jessica Glenza
The Guardian
Originally posted 11 Jan 24

People with health insurance may now represent the majority of debtors American hospitals struggle to collect from, according to medical billing analysts.

This marks a sea change from just a few years ago, when people with health insurance represented only about one in 10 bills hospitals considered “bad debt”, analysts said.

“We always used to consider bad debt, especially bad debt write-offs from a hospital perspective, those [patients] that have the ability to pay but don’t,” said Colleen Hall, senior vice-president for Kodiak Solutions, a billing, accounting and consulting firm that works closely with hospitals and performed the analysis.

“Now, it’s not as if these patients across the board are even able to pay, because [out-of-pocket costs are] such an astronomical amount related to what their general income might be.”

Although “bad debt” can be a controversial metric in its own right, those who work in the hospital billing industry say it shows how complex health insurance products with large out-of-pocket costs have proliferated.

“What we noticed was a breaking point right around the 2018-2019 timeframe,” said Matt Szaflarski, director of revenue cycle intelligence at Kodiak Solutions. The trend has since stabilized, but remains at more than half of all “bad debt”.

In 2018, just 11.1% of hospitals’ bad debt came from insured “self-pay” accounts, or from patients whose insurance required out-of-pocket payments, according to Kodiak. By 2022, that share had soared to 57.6% of all hospitals’ bad debt.


The US Healthcare system needs to be fixed:

Not all health insurance plans are created equal. Many plans have narrow networks and limited coverage, leaving patients responsible for costs associated with out-of-network providers or specialized care. This can be particularly detrimental for people with chronic conditions or those requiring emergency care.

Medical debt can have a devastating impact on individuals and families. It can lead to financial hardship, delayed or foregone care, damage to credit scores, and even bankruptcy. This can have long-term consequences for physical and mental health, employment opportunities, and overall well-being.

Fixing the US healthcare system is a complex challenge, but it is essential to ensure that everyone has access to affordable, quality healthcare without fear of financial ruin. 

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist,"
Sam also guesses that the text format is one with which young people are most comfortable.
"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.
She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.

Friday, December 29, 2023

A Hybrid Account of Harm

Unruh, C. F. (2022).
Australasian Journal of Philosophy, 1–14.
https://doi.org/10.1080/00048402.2022.2048401

Abstract

When does a state of affairs constitute a harm to someone? Comparative accounts say that being worse off constitutes harm. The temporal version of the comparative account is seldom taken seriously, due to apparently fatal counterexamples. I defend the temporal version against these counterexamples, and show that it is in fact more plausible than the prominent counterfactual version of the account. Non-comparative accounts say that being badly off constitutes harm. However, neither the temporal comparative account nor the non-comparative account can correctly classify all harms. I argue that we should combine them into a hybrid account of harm. The hybrid account is extensionally adequate and presents a unified view on the nature of harm.


Here's my take:

Charlotte Unruh proposes a new way of thinking about harm. Unruh argues that neither the traditional comparative account nor the non-comparative account of harm can adequately explain all cases of harm. The comparative account says that harm consists in being worse off than one would have been had some event not occurred. The non-comparative account says that harm consists in being in a bad state, regardless of how one would have fared otherwise.

Unruh proposes a hybrid account of harm that combines elements of both the comparative and non-comparative accounts. She says that an agent suffers harm if and only if either (i) the agent suffers ill-being or (ii) the agent's well-being is lower than it was before. This hybrid account is able to explain cases of harm that cannot be explained by either the comparative or non-comparative account alone. For example, the hybrid account explains why it is harmful to prevent someone from achieving a good that they would have otherwise achieved, even if the person is still in a good state overall.
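
A minimal sketch of how Unruh's hybrid criterion could be expressed in code follows. This is my own illustrative encoding, not the author's notation; the numeric well-being scale and the zero threshold for ill-being are assumptions made for the example.

def is_harmed(wellbeing_before: float, wellbeing_after: float,
              ill_being_threshold: float = 0.0) -> bool:
    # Clause (i), non-comparative: the agent ends up in a state of ill-being.
    suffers_ill_being = wellbeing_after < ill_being_threshold
    # Clause (ii), temporal comparative: the agent is worse off than before.
    worse_off_than_before = wellbeing_after < wellbeing_before
    return suffers_ill_being or worse_off_than_before

# Example: falling from a high to a lower but still positive level of well-being
# counts as harm under clause (ii), even though clause (i) is not triggered.
print(is_harmed(wellbeing_before=8.0, wellbeing_after=5.0))  # True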

Unruh's hybrid account of harm has a number of advantages over other accounts of harm. It is extensionally adequate, meaning that it correctly classifies all cases of harm as harmful and all cases of non-harm as non-harmful. It is also normatively plausible, meaning that it accords with our intuitions about what counts as harm. Additionally, the hybrid account is able to explain a number of different phenomena related to harm, such as the severity of harm, the distribution of harm, and the compensation for harm.

Wednesday, November 8, 2023

Everything you need to know about artificial wombs

Cassandra Willyard
MIT Technology Review
Originally posted 29 SEPT 23

Here is an excerpt:

What is an artificial womb?

An artificial womb is an experimental medical device intended to provide a womblike environment for extremely premature infants. In most of the technologies, the infant would float in a clear “biobag,” surrounded by fluid. The idea is that preemies could spend a few weeks continuing to develop in this device after birth, so that “when they’re transitioned from the device, they’re more capable of surviving and having fewer complications with conventional treatment,” says George Mychaliska, a pediatric surgeon at the University of Michigan.

One of the main limiting factors for survival in extremely premature babies is lung development. Rather than breathing air, babies in an artificial womb would have their lungs filled with lab-made amniotic fluid that mimics the fluid they would have had in utero. Neonatologists would insert tubes into blood vessels in the umbilical cord so that the infant's blood could cycle through an artificial lung to pick up oxygen.

The device closest to being ready to be tested in humans, called the EXTrauterine Environment for Newborn Development, or EXTEND, encases the baby in a container filled with lab-made amniotic fluid. It was invented by Alan Flake and Marcus Davey at the Children’s Hospital of Philadelphia and is being developed by Vitara Biomedical.


Here is my take:

Artificial wombs are experimental medical devices that aim to provide a womb-like environment for extremely premature infants. The technology is still in its early stages of development, but it has the potential to save the lives of many babies who would otherwise not survive.

Overall, artificial wombs are a promising new technology with the potential to revolutionize the care of premature infants. However, more research is needed to fully understand the risks and benefits of the technology before it can be widely used.

Here are some additional ethical concerns that have been raised about artificial wombs:
  • The potential for artificial wombs to be used to create designer babies or to prolong the lives of fetuses with severe disabilities.
  • The potential for artificial wombs to be used to exploit or traffick babies.
  • The potential for artificial wombs to exacerbate existing social and economic inequalities.
It is important to have a public conversation about these ethical concerns before artificial wombs become widely available. We need to develop clear guidelines for how the technology should be used and ensure that it is used in a way that benefits all of society.

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Bloomberg.com
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearn for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.


Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, where users are financially rewarded for producing controversial content. He argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving former executive Yoel Roth, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns.  Musk is not a business genius and does not understand how to promote a healthy social media site.

Monday, October 2, 2023

Research: How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock & William Gerken
Harvard Business Review
Originally posted 5 March 2018

Here is an excerpt:

In our research, we wanted to understand just how contagious bad behavior is. To do so, we examined peer effects in misconduct by financial advisors, focusing on mergers between financial advisory firms that each have multiple branches. In these mergers, financial advisors meet new co-workers from one of the branches of the other firm, exposing them to new ideas and behaviors.

We collected an extensive data set using the detailed regulatory filings available for financial advisors. We defined misconduct as customer complaints for which the financial advisor either paid a settlement of at least $10,000 or lost an arbitration decision. We observed when complaints occurred for each financial advisor, as well as for the advisor’s co-workers.

We found that financial advisors are 37% more likely to commit misconduct if they encounter a new co-worker with a history of misconduct. This result implies that misconduct has a social multiplier of 1.59 — meaning that, on average, each case of misconduct results in an additional 0.59 cases of misconduct through peer effects.
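
One way to see how a 37% peer effect could yield a multiplier of roughly 1.59 is to treat each induced case as itself inducing further cases at the same rate, so the expected total is a geometric series. The excerpt does not spell out the authors' calculation, so the short sketch below is an assumption about how the figure is derived, not the study's own method:

peer_effect = 0.37                   # 37% increased likelihood of misconduct from an exposed co-worker
multiplier = 1 / (1 - peer_effect)   # sum of 1 + 0.37 + 0.37**2 + ... for peer_effect < 1
print(round(multiplier, 2))          # ~1.59: each case yields ~0.59 additional cases via peers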

However, observing similar behavior among co-workers does not explain why this similarity occurs. Co-workers could behave similarly because of peer effects – in which workers learn behaviors or social norms from each other — but similar behavior could arise because co-workers face the same incentives or because individuals prone to making similar choices naturally choose to work together.

In our research, we wanted to understand how peer effects contribute to the spread of misconduct. We compared financial advisors across different branches of the same firm, because this allowed us to control for the effect of the incentive structure faced by all advisors in the firm. We also focused on changes in co-workers caused by mergers, because this allowed us to remove the effect of advisors choosing their co-workers. As a result, we were able to isolate peer effects.


Here is my summary: 

The article discusses a study that found that even otherwise honest employees are more likely to commit misconduct if they work alongside a dishonest individual. The study, conducted by researchers Stephen Dimmock and William Gerken, found that financial advisors were 37% more likely to commit misconduct if they encountered a new co-worker with a history of misconduct.

The researchers believe that this is because people are more likely to learn bad behavior than good behavior. When we see someone else getting away with misconduct, it can make us think that it's okay to do the same thing. Additionally, when we're surrounded by people who are behaving badly, it can create a culture of acceptance for misconduct.

Thursday, September 28, 2023

US prison labor is cruel and pointless legalized slavery.

Dyjuan Tatro
The Guardian
Originally posted 22 Sept 23

Here is an excerpt:

It costs New York around $70,000 a year in taxpayer money to imprison someone. It costs the BPI about $10,000 a year to educate an incarcerated student. New York’s recidivism rate is 40%, while graduates of the BPI and similar programs recidivate at only 4%, a tenfold decrease. Yet, despite its clear positive record, only 300 of New York’s 30,000 incarcerated people are enrolled at the BPI in any given semester. I was one of a lucky few.

Prisons are designed to warehouse, traumatize and exploit people, then send them back home in worse shape than when they entered the system. Despite having worked every day, the vast majority of people are released with no job experience, no references and no hope. Some would take this to mean that the system is failing. And it is with regard to public safety, rehabilitation and justice, but it’s horrifyingly successful at two things: guaranteeing jobs for some and perpetuating slavery for others.

Over the years, I learned that prison officials were not interested in giving us fruitful educational and job opportunities that allowed us to go home and stay home. The reality is much more sinister. Prisons are a job program for officers that requires us to keep coming back.


Here is my summary:

The article is a personal account of the author's experience working in prison. Tatro argues that prison labor is a form of legalized slavery, and that it is cruel and pointless. He writes that his work in prison was meaningless and dehumanizing, and that it did not teach him any skills or prepare him for life outside of prison. He also argues that prison labor undermines the living standards of workers outside of prison, as businesses that use prison labor are able to pay their workers less.

Tatro's article is a powerful indictment of the US prison system, and it raises important questions about the role of labor in the rehabilitation of prisoners.

Tuesday, September 26, 2023

I Have a Question for the Famous People Who Have Tried to Apologize

Elizabeth Spiers
The New York Times - Guest Opinion
Originally posted 22 September 23

Here is an excerpt:

As a talk show host, Ms. Barrymore has been lauded in part for her empathy. She is vulnerable, and that makes her guests feel like they can be, too. But even nice people can be self-centered when they’re on the defensive. That’s what happened when people objected to the news that her show would return to production despite the writers’ strike. In a teary, rambling video on Instagram, which was later deleted, she spoke about how hard the situation had been — for her. “I didn’t want to hide behind people. So I won’t. I won’t polish this with bells and whistles and publicists and corporate rhetoric. I’ll just stand out there and accept and be responsible.” (Ms. Barrymore’s awkward, jumbled sentences unwittingly demonstrated how dearly she needs those writers.) Finally, she included a staple of the public figure apology genre: “My intentions have never been in a place to upset or hurt anyone,” she said. “It’s not who I am.”

“This is not who I am” is a frequent refrain from people who are worried that they’re going to be defined by their worst moments. It’s an understandable concern, given the human tendency to pay more attention to negative events. People are always more than the worst thing they’ve done. But it’s also true that the worst things they’ve done are part of who they are.

Somehow, Mila Kunis’s scripted apology was even worse. She and Mr. Kutcher had weathered criticism for writing letters in support of their former “That ’70s Show” co-star Danny Masterson after he was convicted of rape. Facing her public, she spoke in the awkward cadence people have when they haven’t memorized their lines and don’t know where the emphasis should fall. “The letters were not written to question the legitimacy” — pause — “of the judicial system,” she said, “or the validity” — pause — “of the jury’s ruling.” For an actress, it was not a very convincing performance. Mr. Kutcher, who is her husband, was less awkward in his delivery, but his defense was no more convincing. The letters, he explained, were only “intended for the judge to read,” as if the fact that the couple operated behind the scenes made it OK.


Here are my observations about the main theme of this article:

Spiers argues that many celebrity apologies fall short because they are not sincere. She says that they often lack the essential elements of a good apology: acknowledging the offense, providing an explanation, expressing remorse, and making amends. Instead, many celebrity apologies are self-serving and aimed at salvaging their public image.

Spiers concludes by saying that if celebrities want their apologies to be meaningful, they need to be honest, take responsibility for their actions, and show that they are truly sorry for the harm they have caused.

I would also add that celebrity apologies can be difficult to believe because they often follow a predictable pattern. The celebrity typically issues a statement expressing their regret and apologizing to the people they have hurt. They may also offer a brief explanation for their behavior, but they often avoid taking full responsibility for their actions. And while some celebrities may make amends in some way, such as donating to charity or volunteering their time, many do not.

As a result, many people are skeptical of celebrity apologies. They see them as nothing more than a way for celebrities to save face and get back to their normal lives. This is why it is so important for celebrities to be sincere and genuine when they apologize.

Friday, September 8, 2023

He was a top church official who criticized Trump. He says Christianity is in crisis

S. Detrow, G. J. Sanchez, & S. Handel
npr.org
Originally posted 8 Aug 23

Here is an excerpt:

What's the big deal? 

According to Moore, Christianity is in crisis in the United States today.
  • Moore is now the editor-in-chief of the Christianity Today magazine and has written a new book, Losing Our Religion: An Altar Call For Evangelical America, which is his attempt at finding a path forward for the religion he loves.
  • Moore believes part of the problem is that "almost every part of American life is tribalized and factionalized," and that has extended to the church.
  • "I think if we're going to get past the blood and soil sorts of nationalism or all of the other kinds of totalizing cultural identities, it's going to require rethinking what the church is," he told NPR.
  • During his time in office, Trump embraced a Christian nationalist stance — the idea that the U.S. is a Christian country and should enforce those beliefs. In the run-up to the 2024 presidential election, Republican candidates are again vying for the influential evangelical Christian vote, demonstrating its continued influence in politics.
  • In Aug. 2022, church leaders confirmed the Department of Justice was investigating Southern Baptists following a sexual abuse crisis. In a statement, SBC leaders said: "Current leaders across the SBC have demonstrated a firm conviction to address those issues of the past and are implementing measures to ensure they are never repeated in the future."
  • In 2017, the church voted to formally "denounce and repudiate" white nationalism at its annual meeting.

What is he saying? 

Moore spoke to All Things Considered's Scott Detrow about what he thinks the path forward is for evangelicalism in America.

On why he thinks Christianity is in crisis:
It was the result of having multiple pastors tell me, essentially, the same story about quoting the Sermon on the Mount, parenthetically, in their preaching — "turn the other cheek" — [and] to have someone come up after to say, "Where did you get those liberal talking points?" And what was alarming to me is that in most of these scenarios, when the pastor would say, "I'm literally quoting Jesus Christ," the response would not be, "I apologize." The response would be, "Yes, but that doesn't work anymore. That's weak." And when we get to the point where the teachings of Jesus himself are seen as subversive to us, then we're in a crisis.

The information is here. 

Thursday, August 31, 2023

It’s not only political conservatives who worry about moral purity

K. Gray, W. Blakey, & N. DiMaggio
psyche.co
Originally posted 13 July 23

Here are two excerpts:

What does this have to do with differences in moral psychology? Well, moral psychologists have suggested that politically charged arguments about sexuality, spirituality and other subjects reflect deep differences in the moral values of liberals and conservatives. Research involving scenarios like this one has seemed to indicate that conservatives, unlike liberals, think that maintaining ‘purity’ is a moral good in itself – which for them might mean supporting what they construe as the ‘sanctity of marriage’, for example.

It may seem strange to think about ‘purity’ as a core driver of political differences. But purity, in the moral sense, is an old concept. It pops up in the Hebrew Bible a lot, in taboos around food, menstruation, and divine encounters. When Moses meets God at the Burning Bush, God says to Moses: ‘Do not come any closer, take off your sandals, for the place where you are standing is holy ground.’ Why does God tell Moses to take off his shoes? Not because his shoes magically hurt God, but because shoes are dirty, and it’s disrespectful to wear your shoes in the presence of the creator of the universe. Similarly, in ancient Greece, worshippers were often required to endure long purification rituals before looking at sacred religious idols or engaging in different spiritual rites. These ancient moral practices seem to reflect an intuition that ‘cleanliness is next to Godliness’.

In the modern era, purity has repeatedly appeared at the centre of political battlegrounds, as in clashes between US conservatives and liberals over sexual education and mores in the 1990s. It was around this time that the psychologist Jonathan Haidt began formulating a theory to help explain the moral divide. Moral foundations theory argues that liberals and conservatives are divided because they rely on distinct moral values, including purity, to different degrees.

(cut)

A harm-focused perspective on moral judgments related to ‘purity’ could help us better understand and communicate with moral opponents. We all grasp the importance of protecting ourselves and our loved ones from harm. Learning that people on the ‘other side’ of a political divide care about questions of purity because they connect these to their understanding of harm can help us empathise with different moral opinions. It is easy for a liberal to dismiss a conservative’s condemnation of dead-chicken sex when it is merely said to be ‘impure’; it is harder to be dismissive if it’s suggested that someone who makes a habit of that behaviour might end up harming people.

Explicitly grounding discussions of morality in perceptions of harm could help us all to be better citizens of a ‘small-L liberal’ society – one in which the right to swing our fists ends where others’ noses begin. If something seems disgusting, impure and immoral to you, take some time to try to articulate the harms you intuitively perceive. Talking about these potential harms may help other people understand where you are coming from. Of course, someone might not share your judgment that harm is being done. But identifying perceived harms at least puts the conversation in terms that everyone understands.


Here is my summary:

The authors define purity as "the state of being free from contamination or pollution."  They argue that people on both the left and the right care about purity because they associate it with safety and well-being.
They provide examples of how liberals and conservatives can both use purity-related language, such as "desecrate" and "toxic." They propose a new explanation of moral judgments that suggests that people care about purity when they perceive that 'impure' acts can lead to harm.

Sunday, August 27, 2023

Ontario court rules against Jordan Peterson, upholds social media training order

Canadian Broadcasting Corporation
Originally posted 23 August 23

An Ontario court ruled against psychologist and media personality Jordan Peterson Wednesday, and upheld a regulatory body's order that he take social media training in the wake of complaints about his controversial online posts and statements.

Last November, Peterson, a professor emeritus with the University of Toronto psychology department who is also an author and media commentator, was ordered by the College of Psychologists of Ontario to undergo a coaching program on professionalism in public statements.

That followed numerous complaints to the governing body of Ontario psychologists, of which Peterson is a member, regarding his online commentary directed at politicians, a plus-sized model, and transgender actor Elliot Page, among other issues. You can read more about those social media posts here.

The college's complaints committee concluded his controversial public statements could amount to professional misconduct and ordered Peterson to pay for a media coaching program — noting failure to comply could mean the loss of his licence to practice psychology in the province.

Peterson filed for a judicial review, arguing his political commentary is not under the college's purview.

Three Ontario Divisional Court judges unanimously dismissed Peterson's application, ruling that the college's decision falls within its mandate to regulate the profession in the public interest and does not affect his freedom of expression.

"The order is not disciplinary and does not prevent Dr. Peterson from expressing himself on controversial topics; it has a minimal impact on his right to freedom of expression," the decision written by Justice Paul Schabas reads, in part.



My take:

Peterson has argued that the order violates his right to free speech. He has also said that the complaints against him were politically motivated. However, the court ruled that the college's order was justified in order to protect the public from harm.

The case of Jordan Peterson is a reminder that psychologists, like other human beings, are not infallible. They are capable of making mistakes and of expressing harmful views. It is important to hold psychologists accountable for their actions, and to ensure that they are held to the highest ethical standards.

Beyond accountability in individual cases like this one, there are a number of other things that can be done to mitigate bias and promote professionalism in psychology. These include:
  • Increasing diversity in the field of psychology
  • Promoting critical thinking and self-reflection among psychologists
  • Developing more specific ethical guidelines for psychologists' use of social media
  • Holding psychologists accountable for their online behavior