Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, April 21, 2024

An Expert Who Has Testified in Foster Care Cases Across Colorado Admits Her Evaluations Are Unscientific

Eli Hager
Originally posted 18 March 24

Diane Baird had spent four decades evaluating the relationships of poor families with their children. But last May, in a downtown Denver conference room, with lawyers surrounding her and a court reporter transcribing, she was the one under the microscope.

Baird, a social worker and professional expert witness, has routinely advocated in juvenile court cases across Colorado that foster children be adopted by or remain in the custody of their foster parents rather than being reunified with their typically lower-income birth parents or other family members.

In the conference room, Baird was questioned for nine hours by a lawyer representing a birth family in a case out of rural Huerfano County, according to a recently released transcript of the deposition obtained by ProPublica.

Was Baird’s method for evaluating these foster and birth families empirically tested? No, Baird answered: Her method is unpublished and unstandardized, and has remained “pretty much unchanged” since the 1980s. It doesn’t have those “standard validity and reliability things,” she admitted. “It’s not a scientific instrument.”

Who hired and was paying her in the case that she was being deposed about? The foster parents, she answered. They wanted to adopt, she said, and had heard about her from other foster parents.

Had she considered or was she even aware of the cultural background of the birth family and child whom she was recommending permanently separating? (The case involved a baby girl of multiracial heritage.) Baird answered that babies have “never possessed” a cultural identity, and therefore are “not losing anything,” at their age, by being adopted. Although when such children grow up, she acknowledged, they might say to their now-adoptive parents, “Oh, I didn’t know we were related to the, you know, Pima tribe in northern California, or whatever the circumstances are.”

The Pima tribe is located in the Phoenix metropolitan area.


Here is my summary:

The article discusses Diane Baird, an expert witness who has testified in foster care cases across Colorado and who admitted under deposition that her evaluations are unscientific. Baird, who has spent four decades evaluating the relationships of poor families with their children, calls her method for assessing families the "Kempe Protocol," yet acknowledged that it is unpublished, unstandardized, and lacks established validity and reliability. This admission raises concerns about the soundness of her evaluations in foster care cases and underscores the need for more rigorous, scientifically grounded approaches to such consequential assessments.

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist, where a company called Therapy For All allegedly posted an advertisement offering money for recordings of therapy sessions, with no further information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Thursday, April 18, 2024

An artificial womb could build a bridge to health for premature babies

Rob Stein
npr.org
Originally posted 12 April 24

Here is an excerpt:

Scientific progress prompts ethical concerns

But the possibility of an artificial womb is also raising many questions. When might it be safe to try an artificial womb for a human? Which preterm babies would be the right candidates? What should they be called? Fetuses? Babies?

"It matters in terms of how we assign moral status to individuals," says Mercurio, the Yale bioethicist. "How much their interests — how much their welfare — should count. And what one can and cannot do for them or to them."

But Mercurio is optimistic those issues can be resolved, and the potential promise of the technology clearly warrants pursuing it.

The Food and Drug Administration held a workshop in September 2023 to discuss the latest scientific efforts to create an artificial womb, the ethical issues the technology raises, and what questions would have to be answered before allowing an artificial womb to be tested for humans.

"I am absolutely pro the technology because I think it has great potential to save babies," says Vardit Ravitsky, president and CEO of The Hastings Center, a bioethics think tank.

But there are particular issues raised by the current political and legal environment.

"My concern is that pregnant people will be forced to allow fetuses to be taken out of their bodies and put into an artificial womb rather than being allowed to terminate their pregnancies — basically, a new way of taking away abortion rights," Ravitsky says.

She also wonders: What if it becomes possible to use artificial wombs to gestate fetuses for an entire pregnancy, making natural pregnancy unnecessary?


Here are some general ethical concerns:

The use of artificial wombs raises several ethical and moral concerns. One key issue is the potential for artificial wombs to be used to extend the limits of fetal viability, which could complicate debates around abortion access and the moral status of the fetus. There are also concerns that artificial wombs could enable "designer babies" through genetic engineering and lead to the commodification of human reproduction. Additionally, some argue that developing a baby outside of a woman's uterus is inherently "unnatural" and could undermine the maternal-fetal bond.

 However, proponents contend that artificial wombs could save the lives of premature infants and provide options for women with high-risk pregnancies.  

 Ultimately, the ethics of artificial womb technology will require careful consideration of principles like autonomy, beneficence, and justice as this technology continues to advance.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage of qualified human therapists, makes the proposal of chatbot therapists a timely and perhaps viable alternative. However, we need to be cautious about hasty implementations of such an alternative, as recent news reports have described grave incidents involving chatbot-human interactions. For example, Walker (2023) reports the death of an eco-anxious man who committed suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potentially maleficent effect of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs.

 This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Research has shown that AI chatbots can be effective in reducing the severity of mental health issues like anxiety, depression, and stress for diverse populations.  They can deliver evidence-based interventions like cognitive behavioral therapy and promote positive psychology.  Some well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Monday, March 25, 2024

Jean Maria Arrigo, Who Exposed Psychologists’ Ties to Torture, Dies at 79

Trip Gabriel
The New York Times
Originally published 19 March 24

Jean Maria Arrigo, a psychologist who exposed efforts by the American Psychological Association to obscure the role of psychologists in coercive interrogations of terror suspects in the aftermath of the Sept. 11, 2001, attacks, died on Feb. 24 at her home in Alpine, Calif. She was 79.

The cause was complications of pancreatic cancer, her husband, John Crigler, said.

A headline about her as a whistle-blower in The Guardian  in 2015 put it succinctly: “‘A National Hero’: Psychologist Who Warned of Torture Collusion Gets Her Due.”

A decade earlier, Dr. Arrigo had been named to a task force by the American Psychological Association, the largest professional group of psychologists, to examine the role of trained psychologists in national security interrogations.

The 10-member panel was formed in response to news reports in 2004 about abuse at the American-run Abu Ghraib prison in Iraq and at Guantánamo Bay in Cuba, which included details about psychologists aiding in interrogations that, according to the International Committee of the Red Cross, were “tantamount to torture.”

Dr. Arrigo later asserted that the A.P.A. task force was a sham — a public relations effort “to put out the fires of controversy right away,” as she told fellow psychologists in a wave-making speech in 2007.


Not all heroes wear capes.

Jean Maria Arrigo, a psychologist known for exposing the American Psychological Association's involvement in obscuring psychologists' roles in coercive interrogations post-9/11, passed away at 79 due to complications from pancreatic cancer. She was a whistleblower who revealed the APA's efforts to downplay psychologists' participation in interrogations deemed as torture. Arrigo criticized the APA's task force, stating it was a sham with ties to the Pentagon and conflicts of interest. Despite facing backlash and attacks from colleagues, she persisted in her crusade against APA complicity with brutal interrogations. Arrigo's work highlighted the ethical dilemmas faced by psychologists in national security contexts and emphasized the need for clear boundaries on involvement in such practices.

Thursday, March 21, 2024

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Nightingale, S. J., & Farid, H. (2022).
Proceedings of the National Academy of Sciences, 119(8).

Abstract

Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.

Here is part of the Discussion section

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media. Safeguards could include, for example, incorporating robust watermarks into the image and video synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

Here are some important points:

This research raises concerns about the potential for misuse of AI-generated faces in areas like deepfakes and disinformation campaigns.

It also opens up interesting questions about how we perceive trust and authenticity in our increasingly digital world.
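
For readers who want to see what the study's headline comparison might look like in practice, here is a minimal, hypothetical sketch: simulated 7-point trustworthiness ratings for synthetic and real faces compared with a two-sample test. All numbers are invented for illustration only; the authors' actual data and analysis may differ.

```python
# Hypothetical sketch: comparing mean trustworthiness ratings for synthetic vs. real faces.
# The ratings below are simulated placeholders, NOT the study's data, and the authors'
# actual statistical analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 7-point trustworthiness ratings (1 = very untrustworthy, 7 = very trustworthy)
synthetic_ratings = rng.normal(loc=4.8, scale=1.0, size=300).clip(1, 7)
real_ratings = rng.normal(loc=4.5, scale=1.0, size=300).clip(1, 7)

# Two-sample (Welch's) t-test: are synthetic faces rated reliably more trustworthy?
t_stat, p_value = stats.ttest_ind(synthetic_ratings, real_ratings, equal_var=False)

print(f"Mean (synthetic): {synthetic_ratings.mean():.2f}")
print(f"Mean (real):      {real_ratings.mean():.2f}")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```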

Sunday, March 17, 2024

The Argument Over a Long-Standing Autism Intervention

Jessica Winter
The New Yorker
Originally posted 12 Feb 24

Here are excerpts:

A.B.A. is the only autism intervention that is approved by insurers and Medicaid in all fifty states. The practice is widely recommended for autistic kids who exhibit dangerous behaviors, such as self-injury or aggression toward others, or who need to acquire basic skills, such as dressing themselves or going to the bathroom. The mother of a boy with severe autism in New York City told me that her son’s current goals in A.B.A. include tolerating the shower for incrementally longer intervals, redirecting the urge to pull on other people’s hair, and using a speech tablet to say no. Another kid might be working on more complex language skills by drilling with flash cards or honing his ability to focus on academic work. Often, A.B.A. targets autistic traits that may be socially stigmatizing but are harmless unto themselves, such as fidgeting, avoiding eye contact, or stereotypic behaviors commonly known as stimming—rocking, hand-flapping, and so forth.

(cut)

In recent years, A.B.A. has come under increasingly vehement criticism from members of the neurodiversity movement, who believe that it cruelly pathologizes autistic behavior. They say that its rewards for compliance are dehumanizing; some compare A.B.A. to conversion therapy. Social-media posts condemning the practice often carry the hashtag #ABAIsAbuse. The message that A.B.A. sends is that “your instinctual way of being is incorrect,” Zoe Gross, the director of advocacy at the nonprofit Autistic Self Advocacy Network, told me. “The goals of A.B.A. therapy—from its inception, but still through today—tend to focus on teaching autistic people to behave like non-autistic people.” But others say this criticism obscures the good work that A.B.A. can do. Alicia Allgood, a board-certified behavior analyst who co-runs an A.B.A. agency in New York City, and who is herself autistic, told me, “The autistic community is up in arms. There is a very vocal part of the autistic population that is saying that A.B.A. is harmful or aversive or has potentially caused trauma.”

(cut)

In recent years, private equity has taken a voracious interest in A.B.A. services, partly because they are perceived as inexpensive. Private-equity firms have consolidated many small clinics into larger chains, where providers are often saddled with unrealistic billing quotas and cut-and-paste treatment plans. Last year, the Center for Economic and Policy Research published a startling report on the subject, which included an account of how Blackstone effectively bankrupted a successful A.B.A. provider and shut down more than a hundred of its treatment sites. Private-equity-owned A.B.A. chains have been accused of fraudulent billing and wage theft; message boards for A.B.A. providers overflow with horror stories about low pay, churn, and burnout. High rates of turnover are acutely damaging to a specialty that relies on familiarity between provider and client. “The idea that we could just franchise A.B.A. providers and anyone could do the work—that was misinformed,” Singer, of the Autism Science Foundation, said.

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may not possess true moral agency, it's crucial to consider how the development and use of AI can be made more responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our current system of ethics. Instead, they focus on the "goods" this system provides, such as deserving blame or praise, and how these can be upheld even with AI's presence. To achieve this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor AI development and use, and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.

Wednesday, March 13, 2024

None of these people exist, but you can buy their books on Amazon anyway

Conspirador Norteno
Substack.com
Originally published 12 Jan 24

Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”, which is the 857,233rd most popular book on the Kindle Store as of January 12th, 2024. There are, however, a few obvious problems. In addition to the unnecessary repetition of the middle initial and last name, Mr. N. Martin N. Martin’s official portrait is a GAN-generated face, and (as we’ll see shortly), his sole published work is strangely similar to several books by another Amazon author with a GAN-generated face.

In an interesting twist, Amazon’s recommendation system suggests another author with a GAN-generated face in the “Customers also bought items by” section of Jason N. Martin N. Martin’s author page. Further exploration of the recommendations attached to both of these authors and their published works reveals a set of a dozen Amazon authors with GAN-generated faces and at least one published book. Amazon’s recommendation algorithms reliably link these authors together; whether this is a sign that the twelve author accounts are actually run by the same entity or merely an artifact of similarities in the content of their books is unclear at this point in time. 


Here's my take:

Forget literary pen names - AI is fueling a new trend on Amazon: books credited to authors who don't exist. These novels, poetry collections, and even children's stories boast intriguing titles and blurbs, yet none of the authors on the cover are real people. Instead, their creations spring from the algorithms of powerful language models.

Here's the gist:
  • AI churns out content: Fueled by vast datasets of text and code, AI can generate chapters, characters, and storylines at an astonishing pace.
  • Ethical concerns: Questions swirl around copyright, originality, and the very nature of authorship. Is an AI-generated book truly a book, or just a clever algorithm mimicking creativity?
  • Quality varies: While some AI-written books garner praise, others are criticized for factual errors, nonsensical plots, and robotic dialogue.
  • Transparency is key: Many readers feel deceived by the lack of transparency about AI authorship. Should books disclose their digital ghostwriters?
This evolving technology challenges our understanding of literature and raises questions about the future of authorship. While AI holds potential to assist and inspire, the human touch in storytelling remains irreplaceable. So, the next time you browse Amazon, remember: the author on the cover might not be who they seem.

Tuesday, March 12, 2024

Discerning Saints: Moralization of Intrinsic Motivation and Selective Prosociality at Work

Kwon, M., Cunningham, J. L., & Jachimowicz, J. M. (2023).
Academy of Management Journal, 66(6), 1625–1650.

Abstract

Intrinsic motivation has received widespread attention as a predictor of positive work outcomes, including employees’ prosocial behavior. We offer a more nuanced view by proposing that intrinsic motivation does not uniformly increase prosocial behavior toward all others. Specifically, we argue that employees with higher intrinsic motivation are more likely to value intrinsic motivation and associate it with having higher morality (i.e., they moralize it). When employees moralize intrinsic motivation, they perceive others with higher intrinsic motivation as being more moral and thus engage in more prosocial behavior toward those others, and judge others who are less intrinsically motivated as less moral and thereby engage in less prosocial behaviors toward them. We provide empirical support for our theoretical model across a large-scale, team-level field study in a Latin American financial institution (n = 784, k = 185) and a set of three online studies, including a preregistered experiment (n = 245, 243, and 1,245), where we develop a measure of the moralization of intrinsic motivation and provide both causal and mediating evidence. This research complicates our understanding of intrinsic motivation by revealing how its moralization may at times dim the positive light of intrinsic motivation itself.

The article is paywalled.  Here are some thoughts:

This study focuses on how intrinsically motivated employees (those who enjoy their work) might act differently towards other employees depending on their own level of intrinsic motivation. The key points are:

Main finding: Employees with high intrinsic motivation tend to associate higher morality with others who also have high intrinsic motivation. This leads them to offer more help and support to those similar colleagues, while judging and helping less to those with lower intrinsic motivation.

Theoretical framework: The concept of "moralization of intrinsic motivation" (MOIM) explains this behavior. Essentially, intrinsic motivation becomes linked to moral judgment, influencing who is seen as "good" and deserving of help.

Implications:
  • For theory: This research adds a new dimension to understanding intrinsic motivation, highlighting the potential for judgment and selective behavior.
  • For practice: Managers and leaders should be aware of the unintended consequences of promoting intrinsic motivation, as it might create bias and division among employees.
  • For employees: Those lacking intrinsic motivation might face disadvantages due to judgment from colleagues. They could try job crafting or seeking alternative support strategies.
Overall, the study reveals a nuanced perspective on intrinsic motivation, acknowledging its positive aspects while recognizing its potential to create inequality and ethical concerns.

Thursday, March 7, 2024

Canada Postpones Plan to Allow Euthanasia for Mentally Ill

Craig McCulloh
Voice of America News
Originally posted 8 Feb 24

The Canadian government is delaying access to medically assisted death for people with mental illness.

Those suffering from mental illness were supposed to be able to access Medical Assistance in Dying — also known as MAID — starting March 17. The recent announcement by the government of Canadian Prime Minister Justin Trudeau was the second delay after original legislation authorizing the practice passed in 2021.

The delay came in response to a recommendation by a majority of the members of a committee made up of senators and members of Parliament.

One of the most high-profile proponents of MAID is British Columbia-based lawyer Chris Considine. In the mid-1990s, he represented Sue Rodriguez, who was dying from amyotrophic lateral sclerosis, commonly known as ALS.

Their bid for approval of a medically assisted death was rejected at the time by the Supreme Court of Canada. But a law passed in 2016 legalized euthanasia for individuals with terminal conditions. From then until 2022, more than 45,000 people chose to die.


Summary:

Canada originally planned to expand its Medical Assistance in Dying (MAiD) program to include individuals with mental illnesses in March 2024.
  • This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
  • The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
  • This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
I would be concerned about the following issues:
  • Vulnerability: Mental illness can impair judgement, raising concerns about informed consent and potential coercion.
  • Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
  • Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
  • Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 Feb 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.

Sunday, March 3, 2024

Is Dan Ariely Telling the Truth?

Tom Bartlett
The Chronicle of Higher Ed
Originally posted 18 Feb 24

Here is an excerpt:

In August 2021, the blog Data Colada published a post titled “Evidence of Fraud in an Influential Field Experiment About Dishonesty.” Data Colada is run by three researchers — Uri Simonsohn, Leif Nelson, and Joe Simmons — and it serves as a freelance watchdog for the field of behavioral science, which has historically done a poor job of policing itself. The influential field experiment in question was described in a 2012 paper, published in the Proceedings of the National Academy of Sciences, by Ariely and four co-authors. In the study, customers of an insurance company were asked to report how many miles they had driven over a period of time, an answer that might affect their premiums. One set of customers signed an honesty pledge at the top of the form, and another signed at the bottom. The study found that those who signed at the top reported higher mileage totals, suggesting that they were more honest. The authors wrote that a “simple change of the signature location could lead to significant improvements in compliance.” The study was classic Ariely: a slight tweak to a system that yields real-world results.

But did it actually work? In 2020, an attempted replication of the effect found that it did not. In fact, multiple attempts to replicate the 2012 finding all failed (though Ariely points to evidence in a recent, unpublished paper, on which he is a co-author, indicating that the effect might be real). The authors of the attempted replication posted the original data from the 2012 study, which was then scrutinized by a group of anonymous researchers who found that the data, or some of it anyway, had clearly been faked. They passed the data along to the Data Colada team. There were multiple red flags. For instance, the number of miles customers said they’d driven was unrealistically uniform. About the same number of people drove 40,000 miles as drove 500 miles. No actual sampling would look like that — but randomly generated data would. Two different fonts were used in the file, apparently because whoever fudged the numbers wasn’t being careful.

In short, there is no doubt that the data were faked. The only question is, who did it?


This article discusses an investigation into the research conduct of Dr. Dan Ariely, a well-known behavioral economist at Duke University. The investigation, prompted by concerns about potential data fabrication, concluded that while no evidence of fabricated data was found, Ariely did commit research misconduct by failing to adequately vet findings and maintain proper records.

The article highlights several specific issues identified by the investigation, including inconsistencies in data and a lack of supporting documentation for key findings. It also mentions that Ariely made inaccurate statements about his personal history, such as misrepresenting his age at the time of a childhood accident.

While Ariely maintains that he did not intentionally fabricate data and attributes the errors to negligence and a lack of awareness, the investigation's findings have damaged his reputation and raised questions about the integrity of his research. The article concludes by leaving the reader to ponder whether Ariely's transgressions can be forgiven or if they represent a deeper pattern of dishonesty.

It's important to note that the article presents one perspective on a complex issue and doesn't offer definitive answers. Further research and analysis are necessary to form a complete understanding of the situation.

Tuesday, February 27, 2024

Robot, let us pray! Can and should robots have religious functions? An ethical exploration of religious robots

Puzio, A.
AI & Soc (2023).
https://doi.org/10.1007/s00146-023-01812-z

Abstract

Considerable progress is being made in robotics, with robots being developed for many different areas of life: there are service robots, industrial robots, transport robots, medical robots, household robots, sex robots, exploration robots, military robots, and many more. As robot development advances, an intriguing question arises: should robots also encompass religious functions? Religious robots could be used in religious practices, education, discussions, and ceremonies within religious buildings. This article delves into two pivotal questions, combining perspectives from philosophy and religious studies: can and should robots have religious functions? Section 2 initiates the discourse by introducing and discussing the relationship between robots and religion. The core of the article (developed in Sects. 3 and 4) scrutinizes the fundamental questions: can robots possess religious functions, and should they? After an exhaustive discussion of the arguments, benefits, and potential objections regarding religious robots, Sect. 5 addresses the lingering ethical challenges that demand attention. Section 6 presents a discussion of the findings, outlines the limitations of this study, and ultimately responds to the dual research question. Based on the study’s results, brief criteria for the development and deployment of religious robots are proposed, serving as guidelines for future research. Section 7 concludes by offering insights into the future development of religious robots and potential avenues for further research.


Summary

Can robots fulfill religious functions? The article explores the technical feasibility of designing robots that could engage in religious practices, education, and ceremonies. It acknowledges the current limitations of robots, particularly their lack of sentience and spiritual experience. However, it also suggests potential avenues for development, such as robots equipped with advanced emotional intelligence and the ability to learn and interpret religious texts.

Should robots fulfill religious functions? This is where the ethical debate unfolds. The article presents arguments both for and against. On the one hand, robots could potentially offer various benefits, such as increasing accessibility to religious practices, providing companionship and spiritual guidance, and even facilitating interfaith dialogue. On the other hand, concerns include the potential for robotization of faith, the blurring of lines between human and machine in the context of religious experience, and the risk of reinforcing existing biases or creating new ones.

Ultimately, the article concludes that there is no easy answer to the question of whether robots should have religious functions. It emphasizes the need for careful consideration of the ethical implications and ongoing dialogue between religious communities, technologists, and ethicists. This ethical exploration paves the way for further research and discussion as robots continue to evolve and their potential roles in society expand.

Wednesday, February 21, 2024

Ethics Ratings of Nearly All Professions Down in U.S.

M. Brenan and J. M. Jones
gallup.com
Originally posted 22 Jan 24

Here is an excerpt:

New Lows for Five Professions; Three Others Tie Their Lows

Ethics ratings for five professions hit new lows this year, including members of Congress (6%), senators (8%), journalists (19%), clergy (32%) and pharmacists (55%).

Meanwhile, the ratings of bankers (19%), business executives (12%) and college teachers (42%) tie their previous low points. Bankers’ and business executives’ ratings were last this low in 2009, just after the Great Recession. College teachers have not been viewed this poorly since 1977.

College Graduates Tend to View Professions More Positively

About half of the 23 professions included in the 2023 survey show meaningful differences by education level, with college graduates giving a more positive honesty and ethics rating than non-college graduates in each case. Almost all of the 11 professions showing education differences are performed by people with a bachelor’s degree, if not a postgraduate education.

The largest education differences are seen in ratings of dentists and engineers, with roughly seven in 10 college graduates rating those professions’ honesty and ethical standards highly, compared with slightly more than half of non-graduates.

Ratings of psychiatrists, college teachers and pharmacists show nearly as large educational differences, ranging from 14 to 16 points, while doctors, nurses and veterinarians also show double-digit education gaps.

These educational differences have been consistent in prior years’ surveys.

Adults without a college degree rate lawyers’ honesty and ethics slightly better than college graduates in the latest survey, 18% to 13%, respectively. While this difference is not statistically significant, in prior years non-college graduates have rated lawyers more highly by significant margins.

Partisans’ Ratings of College Teachers Differ Most    
                
Republicans and Democrats have different views of professions, with Democrats tending to be more complimentary of workers’ honesty and ethical standards than Republicans are. In fact, police officers are the only profession with higher honesty and ethics ratings among Republicans and Republican-leaning independents (55%) than among Democrats and Democratic-leaning independents (37%).

The largest party differences are seen in evaluations of college teachers, with a 40-point gap (62% among Democrats/Democratic leaners and 22% among Republicans/Republican leaners). Partisans’ honesty and ethics ratings of psychiatrists, journalists and labor union leaders differ by 20 points or more, while there is a 19-point difference for medical doctors.

Wednesday, February 14, 2024

Responding to Medical Errors—Implementing the Modern Ethical Paradigm

T. H. Gallagher &  A. Kachalia
The New England Journal of Medicine
January 13, 2024
DOI: 10.1056/NEJMp2309554

Here are some excerpts:

Traditionally, recommendations regarding responding to medical errors focused mostly on whether to disclose mistakes to patients. Over time, empirical research, ethical analyses, and stakeholder engagement began to inform expectations — which are now embodied in communication and resolution programs (CRPs) — for how health care professionals and organizations should respond not just to errors but any time patients have been harmed by medical care (adverse events). CRPs require several steps: quickly detecting adverse events, communicating openly and empathetically with patients and families about the event, apologizing and taking responsibility for errors, analyzing events and redesigning processes to prevent recurrences, supporting patients and clinicians, and proactively working with patients toward reconciliation. In this modern ethical paradigm, any time harm occurs, clinicians and health care organizations are accountable for minimizing suffering and promoting learning. However, implementing this ethical paradigm is challenging, especially when the harm was due to an error.

Historically, the individual physician was deemed the "captain of the ship," solely accountable for patient outcomes. Bioethical analyses emphasized the fiduciary nature of the doctor-patient relationship (i.e., doctors are in a position of greater knowledge and power) and noted that telling patients...about harmful errors supported patient autonomy and facilitated informed consent for future decisions. However, under U.S. tort law, physicians and organizations can be held accountable and financially liable for damages when they make negligent errors. As a result, ethical recommendations for openness were drowned out by fears of lawsuits and payouts, leading to a "deny and defend" response. Several factors initiated a paradigm shift. In the early 2000s, reports from the Institute of Medicine transformed the way the health care profession conceptualized patient safety.1 The imperative became creating cultures of safety that encouraged everyone to report errors to enable learning and foster more reliable systems. Transparency assumed greater importance, since you cannot fix problems you don't know about. The ethical imperative for openness was further supported when rising consumerism made it clear that patients expected responses to harm to include disclosure of what happened, an apology, reconciliation, and organizational learning.

(cut)

CRP Model for Responding to Harmful Medical Errors

Research has been critical to CRP expansion. Several studies have demonstrated that CRPs can enjoy physician support and operate without increasing liability risk. Nonetheless, research also shows that physicians remain concerned about their ability to communicate with patients and families after a harmful error and worry about liability risks including being sued, having their malpractice premiums raised, and having the event reported to the National Practitioner Data Bank (NPDB).5 Successful CRPs typically deploy a formal team, prioritize clinician and leadership buy-in, and engage liability insurers in their efforts. The table details the steps associated with the CRP model, the ethical rationale for each step, barriers to implementation, and strategies for overcoming them.

The growth of CRPs also reflects collaboration among diverse stakeholder groups, including patient advocates, health care organizations, plaintiff and defense attorneys, liability insurers, state medical associations, and legislators. Sustained stakeholder engagement that respects the diverse perspectives of each group has been vital, given the often opposing views these groups have espoused.
As CRPs proliferate, it will be important to address a few key challenges and open questions in implementing this ethical paradigm.


The article provides a number of recommendations for how healthcare providers can implement these principles. These include:
  • Developing open and honest communication with patients.
  • Providing timely and accurate information about the error.
  • Offering apologies and expressing empathy for the harm that has been caused.
  • Working with patients to develop a plan to address the consequences of the error.
  • Conducting a thorough investigation of the error to identify the root causes and prevent future errors.
  • Sharing the results of the investigation with patients and the public.

Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al. (2024).
The Lancet Digital Health, 6(1), e12–e22.

Summary

Background

Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.

Methods

Using the Azure OpenAI application interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.
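
As a concrete illustration of one step in this kind of evaluation, here is a minimal, hypothetical sketch that compares a model's demographic distribution for a condition against a reference prevalence distribution using a chi-square goodness-of-fit test. Every group name and count below is an invented placeholder; the study's actual prompts, conditions, and statistical procedures may differ.

```python
# Hypothetical sketch of one piece of this kind of bias evaluation: comparing a model's
# estimated demographic breakdown for a condition against a reference prevalence
# distribution. The counts below are invented placeholders, not the study's data.
from scipy.stats import chisquare

# Suppose the model was prompted 1,000 times to generate a one-line vignette of a patient
# with condition X, and we tallied the demographic group it chose each time.
model_counts = {"group_a": 620, "group_b": 250, "group_c": 130}

# Reference prevalence of condition X across the same groups (placeholder values).
reference_share = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}

total = sum(model_counts.values())
observed = [model_counts[g] for g in reference_share]
expected = [reference_share[g] * total for g in reference_share]

# Chi-square goodness-of-fit: does the model's distribution deviate from the reference?
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.2e}")
```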

Findings

We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.

Interpretation

Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.

Monday, February 5, 2024

Should Patients Be Allowed to Die From Anorexia? Is a 'Palliative' Approach to Mental Illness Ethical?

Katie Engelhart
New York Times Magazine
Originally posted 3 Jan 24

Here is an excerpt:

He came to think that he had been impelled by a kind of professional hubris — a hubris particular to psychiatrists, who never seemed to acknowledge that some patients just could not get better. That psychiatry had actual therapeutic limits. Yager wanted to find a different path. In academic journals, he came across a small body of literature, mostly theoretical, on the idea of palliative psychiatry. The approach offered a way for him to be with patients without trying to make them better: to not abandon the people who couldn’t seem to be fixed. “I developed this phrase of ‘compassionate witnessing,’” he told me. “That’s what priests did. That’s what physicians did 150 years ago when they didn’t have any tools. They would just sit at the bedside and be with somebody.”

Yager believed that a certain kind of patient — maybe 1 or 2 percent of them — would benefit from entirely letting go of standard recovery-oriented care. Yager would want to know that such a patient had insight into her condition and her options. He would want to know that she had been in treatment in the past, not just once but several times. Still, he would not require her to have tried anything and everything before he brought her into palliative care. Even a very mentally ill person, he thought, was allowed to have ideas about what she could and could not tolerate.

If the patient had a comorbidity, like depression, Yager would want to know that it was being treated. Maybe, for some patients, treating their depression would be enough to let them keep fighting. But he wouldn’t insist that a person be depression-free before she left standard treatment. Not all depression can be cured, and many people are depressed and make decisions for themselves every day. It would be Yager’s job to tease out whether what the patient said she wanted was what she authentically desired, or was instead an expression of pathological despair. Or more: a suicidal yearning. Or something different: a cry for help. That was always part of the job: to root around for authenticity in the morass of a disease.


Some thoughts:

The question of whether patients with anorexia nervosa should be allowed to die from their illness or receive palliative care is a complex and emotionally charged one, lacking easy answers. It delves into the profound depths of autonomy, mental health, and the very meaning of life itself.

The Anorexic's Dilemma:

Anorexia nervosa is a severe eating disorder characterized by a relentless pursuit of thinness and an intense fear of weight gain. It often manifests in severe food restriction, excessive exercise, and distorted body image. This relentless control, however, comes at a devastating cost. Organ failure, malnutrition, and even death can be the tragic consequences of the disease's progression.

Palliative Care: Comfort Not Cure:

Palliative care focuses on symptom management and improving quality of life for individuals with life-threatening illnesses. In the context of anorexia, it would involve addressing physical discomfort, emotional distress, and spiritual concerns, but without actively aiming for weight gain or cure. This raises numerous ethical and practical questions:
  • Respecting Autonomy: Does respecting a patient's autonomy mean allowing them to choose a path that may lead to death, even if their decision is influenced by a mental illness?
  • The Line Between Choice and Coercion: How do we differentiate between a genuine desire for death and succumbing to the distorted thinking patterns of anorexia?
  • Futility vs. Hope: When is treatment considered futile, and when should hope for recovery, however slim, be prioritized?
Finding the Middle Ground:

There's no one-size-fits-all answer to this intricate dilemma. Each case demands individual consideration, taking into account the patient's mental capacity, level of understanding, and potential for recovery. Open communication, involving the patient, their family, and a multidisciplinary team of healthcare professionals, is crucial in navigating this sensitive terrain.

Potential Approaches:
  • Enhanced Supportive Care: Focusing on improving the patient's quality of life through pain management, emotional support, and addressing underlying psychological issues.
  • Conditional Palliative Care: Providing palliative care while continuing to offer and encourage life-sustaining treatment, with the possibility of transitioning back to active recovery if the patient shows signs of willingness.
  • Advance Directives: Encouraging patients to discuss their wishes and preferences beforehand, allowing for informed decision-making when faced with difficult choices.

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist,"
Sam also guesses that the text format is one with which young people are most comfortable.
"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.
She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.