Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Showing posts with label Values.

Saturday, May 11, 2024

Can Robots Have Personal Identity?

Alonso, M.
Int J of Soc Robotics 15, 211–220 (2023).


This article attempts to answer the question of whether robots can have personal identity. In recent years, and due to the numerous and rapid technological advances, the discussion around the ethical implications of Artificial Intelligence, Artificial Agents or simply Robots, has gained great importance. However, this reflection has almost always focused on problems such as the moral status of these robots, their rights, their capabilities or the qualities that these robots should have to support such status or rights. In this paper I want to address a question that has been much less analyzed but which I consider crucial to this discussion on robot ethics: the possibility, or not, that robots have or will one day have personal identity. The importance of this question has to do with the role we normally assign to personal identity as central to morality. After posing the problem and exposing this relationship between identity and morality, I will engage in a discussion with the recent literature on personal identity by showing in what sense one could speak of personal identity in beings such as robots. This is followed by a discussion of some key texts in robot ethics that have touched on this problem, finally addressing some implications and possible objections. I finally give the tentative answer that robots could potentially have personal identity, given other cases and what we empirically know about robots and their foreseeable future.

The article explores the idea of personal identity in robots. It acknowledges that this is a complex question tied to how we define "personhood" itself.

There are arguments against robots having personal identity, often focusing on the biological and experiential differences between humans and machines.

On the other hand, the article highlights that robots can develop and change over time, forming a narrative of self much like humans do. They can also build relationships with people, suggesting a form of "relational personal identity".

The article concludes that even if a robot's identity is different from a human's, it could still be considered a true identity, deserving of consideration. This opens the door to discussions about the ethical treatment of advanced AI.

Wednesday, April 24, 2024

What Deathbed Visions Teach Us About Living

Phoebe Zerwick
The New York Times
Originally posted March 12, 2024

Here is an excerpt:

At the time, only a handful of published medical studies had documented deathbed visions, and they largely relied on secondhand reports from doctors and other caregivers rather than accounts from patients themselves. On a flight home from a conference, Kerr outlined a study of his own, and in 2010, a research fellow, Anne Banas, signed on to conduct it with him. Like Kerr, Banas had a family member who, before his death, experienced visions — a grandfather who imagined himself in a train station with his brothers.

The study wasn’t designed to answer how these visions differ neurologically from hallucinations or delusions. Rather, Kerr saw his role as chronicler of his patients’ experiences. Borrowing from social-science research methods, Kerr, Banas and their colleagues based their study on daily interviews with patients in the 22-bed inpatient unit at the Hospice campus in the hope of capturing the frequency and varied subject matter of their visions. Patients were screened to ensure that they were lucid and not in a confused or delirious state. The research, published in 2014 in The Journal of Palliative Medicine, found that visions are far more common and frequent than other researchers had found, with an astonishing 88 percent of patients reporting at least one vision. (Later studies in Japan, India, Sweden and Australia confirm that visions are common. The percentages range from about 20 to 80 percent, though a majority of these studies rely on interviews with caregivers and not patients.)

In the last 10 years, Kerr has hired a permanent research team who expanded the studies to include interviews with patients receiving hospice care at home and with their families, deepening the researchers’ understanding of the variety and profundity of these visions. They can occur while patients are asleep or fully conscious. Dead family members figure most prominently, and by contrast, visions involving religious themes are exceedingly rare. Patients often relive seminal moments from their lives, including joyful experiences of falling in love and painful ones of rejection. Some dream of the unresolved tasks of daily life, like paying bills or raising children. Visions also entail past or imagined journeys — whether long car trips or short walks to school. Regardless of the subject matter, the visions, patients say, feel real and entirely unique compared with anything else they’ve ever experienced. They can begin days, even weeks, before death. Most significant, as people near the end of their lives, the frequency of visions increases, further centering on deceased people or pets. It is these final visions that provide patients, and their loved ones, with profound meaning and solace.

Here is a summary:

The article profiles hospice physician Christopher Kerr's research on deathbed visions. Based on daily interviews with patients screened for lucidity, his team found the visions to be far more common than earlier studies suggested, with 88 percent of patients reporting at least one. The visions most often feature deceased family members or pets and relived moments from patients' lives; religious themes are rare. Patients describe them as feeling real and unlike anything they have experienced before, and the visions grow more frequent as death nears. Distinct from hallucinations or delirium, these final visions provide patients and their loved ones with profound meaning and solace, and they invite a reconsideration of how we think about dying.

Friday, April 19, 2024

Physicians, Spirituality, and Compassionate Patient Care

Daniel P. Sulmasy
The New England Journal of Medicine
March 16, 2024
DOI: 10.1056/NEJMp2310498

Mind, body, and soul are inseparable. Throughout human history, healing has been regarded as a spiritual event. Illness (especially serious illness) inevitably raises questions beyond science: questions of a transcendent nature. These are questions of meaning, value, and relationship.1 They touch on perennial and profoundly human enigmas. Why is my child sick? Do I still have value now that I am no longer a "productive" working member of society? Why does brokenness in my body remind me of the brokenness in my relationships? Or conversely, why does brokenness in relationships so profoundly affect my body?

Historically, most people have turned to religious belief and practice to help answer such questions. Yet they arise for people of all religions and of no religion. These questions can aptly be called spiritual.

Whereas spirituality may be defined as the ways people live in relation to transcendent questions of meaning, value, and relationship, a religion involves a community of belief, texts, and practices sharing a common orientation toward these spiritual questions. The decline of religious belief and practice in Europe and North America over recent decades and a perceived conflict between science and religion have led many physicians to dismiss patients' spiritual and religious concerns as not relevant to medicine. Yet religion and spirituality are associated with a number of health care outcomes. Abundant data show that patients want their physicians to help address their spiritual needs, and that patients whose spiritual needs have been met are aided in making difficult decisions (particularly at the end of life), are more satisfied with their care, and report better quality of life.2 ... Spiritual questions pervade all aspects of medical care, whether addressing self-limiting, chronic, or life-threatening conditions, and whether in inpatient or outpatient settings.

Beyond the data, however, many medical ethicists recognize that the principles of beneficence and respect for patients as whole persons require physicians to do more than attend to the details of physiological and anatomical derangements. Spirituality and religion are essential to many patients' identities as persons. Patients (and their families) experience illness, healing, and death as whole persons. Ignoring the spiritual aspects of their lives and identities is not respectful, and it divorces medical practice from a fundamental mode of patient experience and coping. Promoting the good of patients requires attention to their notion of the highest good. 

Here is my summary:

The article discusses the interconnectedness of mind, body, and soul in the context of healing and spirituality. It highlights how illness raises questions beyond science, touching on meaning, value, and relationships. While historically people turned to religious beliefs for answers, these spiritual questions arise for individuals of all faiths or no faith. The decline of religious practice in some regions has led many physicians to dismiss spiritual concerns in medicine, despite evidence linking spirituality to health outcomes. Patients want their physicians to address their spiritual needs, which influence decision-making, satisfaction with care, and quality of life. Medical ethics emphasizes the importance of treating patients as whole persons, including their spiritual identities. Physicians are encouraged to inquire about patients' spiritual needs respectfully, even if they do not share the same beliefs.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.

Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Monday, March 25, 2024

Jean Maria Arrigo, Who Exposed Psychologists’ Ties to Torture, Dies at 79

Trip Gabriel
The New York Times
Originally published 19 March 24

Jean Maria Arrigo, a psychologist who exposed efforts by the American Psychological Association to obscure the role of psychologists in coercive interrogations of terror suspects in the aftermath of the Sept. 11, 2001, attacks, died on Feb. 24 at her home in Alpine, Calif. She was 79.

The cause was complications of pancreatic cancer, her husband, John Crigler, said.

A headline about her as a whistle-blower in The Guardian in 2015 put it succinctly: "'A National Hero': Psychologist Who Warned of Torture Collusion Gets Her Due."

A decade earlier, Dr. Arrigo had been named to a task force by the American Psychological Association, the largest professional group of psychologists, to examine the role of trained psychologists in national security interrogations.

The 10-member panel was formed in response to news reports in 2004 about abuse at the American-run Abu Ghraib prison in Iraq and at Guantánamo Bay in Cuba, which included details about psychologists aiding in interrogations that, according to the International Committee of the Red Cross, were “tantamount to torture.”

Dr. Arrigo later asserted that the A.P.A. task force was a sham — a public relations effort “to put out the fires of controversy right away,” as she told fellow psychologists in a wave-making speech in 2007.

Not all heroes wear capes.

Jean Maria Arrigo, the psychologist who exposed the American Psychological Association's efforts to obscure psychologists' roles in coercive interrogations after 9/11, died at 79 of complications from pancreatic cancer. As a member of the APA's task force on national security interrogations, she concluded that the panel was a sham, marked by Pentagon ties and conflicts of interest, and said so publicly. Despite backlash and attacks from colleagues, she persisted in documenting the APA's complicity in interrogations tantamount to torture. Her work highlighted the ethical dilemmas psychologists face in national security contexts and underscored the need for clear boundaries on their involvement in such practices.

Wednesday, March 13, 2024

None of these people exist, but you can buy their books on Amazon anyway

Conspirador Norteno
Originally published 12 Jan 24

Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”, which is the 857,233rd most popular book on the Kindle Store as of January 12th, 2024. There are, however, a few obvious problems. In addition to the unnecessary repetition of the middle initial and last name, Mr. N. Martin N. Martin’s official portrait is a GAN-generated face, and (as we’ll see shortly), his sole published work is strangely similar to several books by another Amazon author with a GAN-generated face.

In an interesting twist, Amazon’s recommendation system suggests another author with a GAN-generated face in the “Customers also bought items by” section of Jason N. Martin N. Martin’s author page. Further exploration of the recommendations attached to both of these authors and their published works reveals a set of a dozen Amazon authors with GAN-generated faces and at least one published book. Amazon’s recommendation algorithms reliably link these authors together; whether this is a sign that the twelve author accounts are actually run by the same entity or merely an artifact of similarities in the content of their books is unclear at this point in time. 

Here's my take:

Forget literary pen names: AI is fueling a new trend on Amazon, books credited to authors who don't exist. The titles and blurbs look convincing, but the author portraits are GAN-generated faces, and the prose itself appears to spring from the algorithms of powerful language models.

Here's the gist:
  • AI churns out content: Fueled by vast datasets of text and code, AI can generate chapters, characters, and storylines at an astonishing pace.
  • Ethical concerns: Questions swirl around copyright, originality, and the very nature of authorship. Is an AI-generated book truly a book, or just a clever algorithm mimicking creativity?
  • Quality varies: While some AI-written books garner praise, others are criticized for factual errors, nonsensical plots, and robotic dialogue.
  • Transparency is key: Many readers feel deceived by the lack of transparency about AI authorship. Should books disclose their digital ghostwriters?
This evolving technology challenges our understanding of literature and raises questions about the future of authorship. While AI holds potential to assist and inspire, the human touch in storytelling remains irreplaceable. So, the next time you browse Amazon, remember: the author on the cover might not be who they seem.

Monday, March 11, 2024

Why People Fail to Notice Horrors Around Them

Tali Sharot and Cass R. Sunstein
The New York Times
Originally posted 25 Feb 24

The miraculous history of our species is peppered with dark stories of oppression, tyranny, bloody wars, savagery, murder and genocide. When looking back, we are often baffled and ask: Why weren't the horrors halted earlier? How could people have lived with them?

The full picture is immensely complicated. But a significant part of it points to the rules that govern the operations of the human brain.

Extreme political movements, as well as deadly conflicts, often escalate slowly. When threats start small and increase gradually, they end up eliciting a weaker emotional reaction, less resistance and more acceptance than they would otherwise. The slow increase allows larger and larger horrors to play out in broad daylight, taken for granted, seen as ordinary.

One of us is a neuroscientist; the other is a law professor. From our different fields, we have come to believe that it is not possible to understand the current period, and the shifts in what counts as normal, without appreciating why and how people do not notice so much of what we live with.

The underlying reason is a pivotal biological feature of our brain: habituation, or our tendency to respond less and less to things that are constant or that change slowly. You enter a cafe filled with the smell of coffee and at first the smell is overwhelming, but no more than 20 minutes go by and you cannot smell it any longer. This is because your olfactory neurons stop firing in response to a now-familiar odor.

Similarly, you stop hearing the persistent buzz of an air-conditioner because your brain filters out background noise. Your brain cares about what recently changed, not about what remained the same.

Habituation is one of our most basic biological characteristics, something that we two-legged, bigheaded creatures share with other animals on earth, including apes, elephants, dogs, birds, frogs, fish and rats. Human beings also habituate to complex social circumstances such as war, corruption, discrimination, oppression, widespread misinformation and extremism. Habituation does not only result in a reduced tendency to notice and react to grossly immoral deeds around us; it also increases the likelihood that we will engage in them ourselves.

Here is my summary:

The article attributes our failure to notice surrounding horrors to habituation: the brain's tendency to respond less and less to stimuli that are constant or that change slowly. Just as we stop noticing the smell of coffee in a cafe or the hum of an air-conditioner, we habituate to gradually escalating social evils such as war, corruption, discrimination, oppression, and extremism. Because slowly growing threats elicit weaker emotional reactions, ever larger horrors can come to be taken for granted and seen as ordinary. Habituation not only dulls our ability to notice and resist grossly immoral deeds; it also makes us more likely to engage in them ourselves.

Friday, March 8, 2024

What Does Being Sober Mean Today? For Many, Not Full Abstinence

Ernesto Londono
The New York Times
Originally posted 4 Feb 24

Here are two excerpts:

Notions of what constitutes sobriety and problematic substance use have grown more flexible in recent years as younger Americans have shunned alcohol in increasing numbers while embracing cannabis and psychedelics - a phenomenon that alarms some addiction experts.

Not long ago, sobriety was broadly understood to mean abstaining from all intoxicating substances, and the term was often associated with people who had overcome severe forms of addiction. These days, it is used more expansively, including by people who have quit drinking alcohol but consume what they deem moderate amounts of other substances, including marijuana and mushrooms.


As some drugs come to be viewed as wellness boosters by those who use them, adherence to the full abstinence model favored by organizations like Alcoholics Anonymous is shifting. Some people call themselves "California sober," a term popularized in a 2021 song by the pop star Demi Lovato, who later disavowed the idea, saying on social media that "sober sober is the only way to be."

Approaches that might have once seemed ludicrous, like treating opioid addiction with psychedelics, have gained broader enthusiasm among doctors as drug overdoses kill tens of thousands of Americans each year.

"The abstinence-only model is very restrictive," said Dr. Peter Grinspoon, a primary care physician at Massachusetts General Hospital who specializes in medical cannabis and is a recovering opioid addict. "We really have to meet people where they are and have a broader recovery tent."

It is impossible to know how many Americans consider themselves part of an increasingly malleable concept of sobriety, but there are indications of shifting views of acceptable substance use. Since 2000, alcohol use among younger Americans has declined significantly, according to a Gallup poll.

At the same time, the use of cannabis and psychedelics has risen as state laws and attitudes grow more permissive, even as both remain illegal under federal law.

A survey found that 44 percent of adults aged 19 to 30 said in 2022 that they had used cannabis in the past year, a record high. That year, 8 percent of adults in the same age range said they had used psychedelics, an increase from the 3 percent a decade earlier.

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 FEB 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.

Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Impact on oneself:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.

Tuesday, February 27, 2024

Robot, let us pray! Can and should robots have religious functions? An ethical exploration of religious robots

Puzio, A.
AI & Soc (2023).


Considerable progress is being made in robotics, with robots being developed for many different areas of life: there are service robots, industrial robots, transport robots, medical robots, household robots, sex robots, exploration robots, military robots, and many more. As robot development advances, an intriguing question arises: should robots also encompass religious functions? Religious robots could be used in religious practices, education, discussions, and ceremonies within religious buildings. This article delves into two pivotal questions, combining perspectives from philosophy and religious studies: can and should robots have religious functions? Section 2 initiates the discourse by introducing and discussing the relationship between robots and religion. The core of the article (developed in Sects. 3 and 4) scrutinizes the fundamental questions: can robots possess religious functions, and should they? After an exhaustive discussion of the arguments, benefits, and potential objections regarding religious robots, Sect. 5 addresses the lingering ethical challenges that demand attention. Section 6 presents a discussion of the findings, outlines the limitations of this study, and ultimately responds to the dual research question. Based on the study’s results, brief criteria for the development and deployment of religious robots are proposed, serving as guidelines for future research. Section 7 concludes by offering insights into the future development of religious robots and potential avenues for further research.


Can robots fulfill religious functions? The article explores the technical feasibility of designing robots that could engage in religious practices, education, and ceremonies. It acknowledges the current limitations of robots, particularly their lack of sentience and spiritual experience. However, it also suggests potential avenues for development, such as robots equipped with advanced emotional intelligence and the ability to learn and interpret religious texts.

Should robots fulfill religious functions? This is where the ethical debate unfolds. The article presents arguments both for and against. On the one hand, robots could potentially offer various benefits, such as increasing accessibility to religious practices, providing companionship and spiritual guidance, and even facilitating interfaith dialogue. On the other hand, concerns include the potential for robotization of faith, the blurring of lines between human and machine in the context of religious experience, and the risk of reinforcing existing biases or creating new ones.

Ultimately, the article concludes that there is no easy answer to the question of whether robots should have religious functions. It emphasizes the need for careful consideration of the ethical implications and ongoing dialogue between religious communities, technologists, and ethicists. This ethical exploration paves the way for further research and discussion as robots continue to evolve and their potential roles in society expand.

Friday, February 23, 2024

How Did Polyamory Become So Popular?

Jennifer Wilson
The New Yorker
Originally posted 25 Dec 23

Here is an excerpt:

What are all these open couples, throuples, and polycules suddenly doing in the culture, besides one another? To some extent, art is catching up with life. Fifty-one per cent of adults younger than thirty told Pew Research, in 2023, that open marriage was “acceptable,” and twenty per cent of all Americans report experimenting with some form of non-monogamy. The extramarital “entanglements” of Will and Jada Pinkett Smith have been tabloid fodder for the past two years. (Pinkett Smith once clarified that their marriage is not “open”; rather, it is a “relationship of transparency.”) In 2020, the reality show “House Hunters,” on HGTV, saw a throuple trying to find their dream home—one with a triple-sink vanity. The same year, the city of Somerville, Massachusetts, allowed domestic partnerships to be made up of “two or more” people.

Some, like the sex therapist (and author of “Open Monogamy, A Guide to Co-Creating Your Ideal Relationship Agreement,” 2021), Tammy Nelson, have attributed the acceptance of a greater number of partners to pandemic-born domestic ennui; after being stuck with one person all day every day, the thinking goes, couples are ready to open up more than their pods. Nelson is part of a cohort of therapists, counsellors, and advice writers, including Esther Perel and the “Savage Love” columnist Dan Savage, who are encouraging married couples to think more flexibly about monogamy. Their advice has found an eager audience among the well-heeled attendees of the “ideas festival” circuit, featured in talks at Google, SXSW, and the Aspen Institute.

The new monogamy skepticism of the moneyed gets some screen time in the pandemic-era breakout hit “The White Lotus.” The show mocks the leisure class as they mope around five-star resorts in Hawaii and Sicily, stewing over love, money, and the impossibility, for people in their tax bracket, of separating the two. In the latest season, Ethan (Will Sharpe) and Harper (Aubrey Plaza) are an attractive young couple stuck in a sexless marriage—until, that is, they go on vacation with the monogamish Cameron (Theo James) and Daphne (Meghann Fahy). After Cameron and Harper have some unaccounted-for time together in a hotel room, Ethan tracks down an unbothered Daphne, lounging on the beach, to share his suspicion that something has happened between their spouses. Some momentary concern on Daphne’s face quickly morphs—in a devastatingly subtle performance by Fahy—into a sly smile. “A little mystery? It’s kinda sexy,” she assures Ethan, before luring him into a seaside cove. That night Ethan and Harper have sex, the wounds of their marriage having been healed by a little something on the side.

Here is my summary:

The article discusses the increasing portrayal and acceptance of non-monogamous relationships in contemporary culture, particularly in literature, cinema, and television. It notes that open relationships, throuples, and polyamorous arrangements are gaining prominence, reflecting changing societal attitudes. The author cites statistics and cultural examples, including a Gucci perfume ad and a plot twist in the TV series "Riverdale." The rise of non-monogamy is linked to a broader shift in societal norms, with some attributing it to pandemic-related ennui and a desire for more flexibility in relationships. The text also delves into the historical roots of polyamory, mentioning the Kerista movement and its adaptation to conservative times in the 1980s. The author concludes by expressing a desire for a more inclusive and equitable representation of polyamory, critiquing the limited perspective presented in a specific memoir discussed in the text.

Wednesday, February 21, 2024

Ethics Ratings of Nearly All Professions Down in U.S.

M. Brenan and J. M. Jones
Originally posted 22 Jan 24

Here is an excerpt:

New Lows for Five Professions; Three Others Tie Their Lows

Ethics ratings for five professions hit new lows this year, including members of Congress (6%), senators (8%), journalists (19%), clergy (32%) and pharmacists (55%).

Meanwhile, the ratings of bankers (19%), business executives (12%) and college teachers (42%) tie their previous low points. Bankers’ and business executives’ ratings were last this low in 2009, just after the Great Recession. College teachers have not been viewed this poorly since 1977.

College Graduates Tend to View Professions More Positively

About half of the 23 professions included in the 2023 survey show meaningful differences by education level, with college graduates giving a more positive honesty and ethics rating than non-college graduates in each case. Almost all of the 11 professions showing education differences are performed by people with a bachelor’s degree, if not a postgraduate education.

The largest education differences are seen in ratings of dentists and engineers, with roughly seven in 10 college graduates rating those professions’ honesty and ethical standards highly, compared with slightly more than half of non-graduates.

Ratings of psychiatrists, college teachers and pharmacists show nearly as large educational differences, ranging from 14 to 16 points, while doctors, nurses and veterinarians also show double-digit education gaps.

These educational differences have been consistent in prior years’ surveys.

Adults without a college degree rate lawyers’ honesty and ethics slightly better than college graduates in the latest survey, 18% to 13%, respectively. While this difference is not statistically significant, in prior years non-college graduates have rated lawyers more highly by significant margins.

Partisans’ Ratings of College Teachers Differ Most

Republicans and Democrats have different views of professions, with Democrats tending to be more complimentary of workers’ honesty and ethical standards than Republicans are. In fact, police officers are the only profession with higher honesty and ethics ratings among Republicans and Republican-leaning independents (55%) than among Democrats and Democratic-leaning independents (37%).

The largest party differences are seen in evaluations of college teachers, with a 40-point gap (62% among Democrats/Democratic leaners and 22% among Republicans/Republican leaners). Partisans’ honesty and ethics ratings of psychiatrists, journalists and labor union leaders differ by 20 points or more, while there is a 19-point difference for medical doctors.

Saturday, February 17, 2024

What Stops People From Standing Up for What’s Right?

Julie Sasse
Greater Good
Originally published 17 Jan 24

Here is an excerpt:

How can we foster moral courage?

Every person can try to become more morally courageous. However, it does not have to be a solitary effort. Instead, institutions such as schools, companies, or social media platforms play a significant role. So, what are concrete recommendations to foster moral courage?
  • Establish and strengthen social and moral norms: With a solid understanding of what we consider right and wrong, it becomes easier to detect wrongdoings. Institutions can facilitate this process by identifying and modeling fundamental values. For example, norms and values expressed by teachers can be important points of reference for children and young adults.
  • Overcome uncertainty: If it is unclear whether someone’s behavior is wrong, witnesses should feel comfortable to inquire, for example, by asking other bystanders how they judge the situation or a potential victim whether they are all right.
  • Contextualize anger: In the face of wrongdoings, anger should not be suppressed since it can provide motivational fuel for intervention. Conversely, if someone expresses anger, it should not be diminished as irrational but considered a response to something unjust. 
  • Provide and advertise reporting systems: By providing reporting systems, institutions relieve witnesses from the burden of selecting and evaluating individual means of intervention and reduce the need for direct confrontation.
  • Show social support: If witnesses directly confront a perpetrator, others should be motivated to support them to reduce risks.
We see that there are several ways to make moral courage less difficult, but they do require effort from individuals and institutions. Why is that effort worth it? Because if more individuals are willing and able to show moral courage, more wrongdoings would be addressed and rectified—and that could help us to become a more responsible and just society.

Main points:
  • Moral courage is the willingness to stand up for what's right despite potential risks.
  • It's rare because of various factors like complexity of the internal process, situational barriers, and difficulty seeing the long-term benefits.
  • Key stages involve noticing a wrongdoing, interpreting it as wrong, feeling responsible, believing in your ability to intervene, and accepting potential risks.
  • Personality traits and situational factors influence these stages.

Thursday, February 15, 2024

The motivating effect of monetary over psychological incentives is stronger in WEIRD cultures

Medvedev, D., Davenport, D., et al.
Nat Hum Behav (2024).


Motivating effortful behaviour is a problem employers, governments and nonprofits face globally. However, most studies on motivation are done in Western, educated, industrialized, rich and democratic (WEIRD) cultures. We compared how hard people in six countries worked in response to monetary incentives versus psychological motivators, such as competing with or helping others. The advantage money had over psychological interventions was larger in the United States and the United Kingdom than in China, India, Mexico and South Africa (N = 8,133). In our last study, we randomly assigned cultural frames through language in bilingual Facebook users in India (N = 2,065). Money increased effort over a psychological treatment by 27% in Hindi and 52% in English. These findings contradict the standard economic intuition that people from poorer countries should be more driven by money. Instead, they suggest that the market mentality of exchanging time and effort for material benefits is most prominent in WEIRD cultures.

The article challenges the assumption that money universally motivates people more than other incentives. It finds that:
  • Monetary incentives were more effective than psychological interventions in WEIRD cultures (Western, Educated, Industrialized, Rich, and Democratic), like the US and UK. People in these cultures exerted more effort for money compared to social pressure or helping others.
  • In contrast, non-WEIRD cultures like China, India, Mexico, and South Africa showed a smaller advantage for money. In some cases, even social interventions like promoting cooperation were more effective than financial rewards.
  • Language can also influence the perceived value of money. In a study with bilingual Indians, those interacting in English (associated with WEIRD cultures) showed a stronger preference for money than those using Hindi.
  • These findings suggest that cultural differences play a significant role in how people respond to various motivational tools. Assuming money as the universal motivator, often based on studies conducted in WEIRD cultures, might be inaccurate and less effective in diverse settings.

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?

The info is here, and a book promotion too.

Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al. (2024).
The Lancet Digital Health, 6(1), e12–e22.

Background

Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.

Methods

Using the Azure OpenAI application interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.

Findings

We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.

Interpretation

Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.

Monday, January 22, 2024

Deciding for Patients Who Have Lost Decision-Making Capacity — Finding Common Ground in Medical Ethics

Bernard Lo
The New England Journal of Medicine
Originally published 16 Dec 23

Here is an excerpt:

Empirical studies...show that advance directives do not work as was hoped.2 Only a minority of patients complete them. Directives commonly are not well informed, because patients have serious misconceptions about life-sustaining interventions and about their own prognoses. Designated surrogates are often inaccurate in stating patients’ preferences in specific scenarios. Patient preferences commonly change over time. Patients often want surrogates to have leeway to override their prior statements. And when making decisions, surrogates frequently consider aspects of patient well-being to be more important than the patient’s previously stated preferences.

Conceptually, relying completely on an incompetent patient’s prior directives may be unsound. Often surrogates must extrapolate from the patient’s previous directives and statements to a situation that the patient did not foresee. Patients generally underestimate how well they can cope with and adapt to new situations.

So the standard approach shifted to advance care planning, a process for helping adults understand and communicate their values, goals, and preferences regarding future care. Advance care planning improves satisfaction with communication and reduces the risk of post-traumatic stress disorder, depression, or anxiety among surrogate decision makers.3 However, its use neither increases the likelihood that decisions are concordant with patients’ values and goals nor improves patients’ quality of life.3

Studies show that patients are less concerned about specific medical interventions than about clinical outcomes, burdens, and quality of life. Such evidence led advocates of advance care planning to begin focusing on preparing for in-the-moment decisions rather than documenting directives for medical interventions.

Many state legislatures rejected the strict requirements for surrogate decision making that Cruzan allowed. By 2004, 10 states allowed patients to appoint a health care proxy in a conversation with a physician as well as in formal documents. By 2016, 41 states — both conservative and liberal — had enacted laws allowing family members to act as health care surrogates for patients who lacked decision-making capacity and had not designated a health care proxy. Seven states included domestic partners or close friends on the list of acceptable surrogates.

Here is a quick summary:

Following the Supreme Court's 1990 Cruzan ruling, which emphasized clear evidence for life-sustaining treatment withdrawal, practices shifted. Advance directives like living wills gained popularity, but studies revealed their limitations. Advance care planning, focusing on communication and values, took hold. POLST forms were introduced to document orders for specific interventions, but studies show those orders are often inconsistent with patients' actual clinical situations.

The emphasis is now on family decision-making and flexible guidelines. Rigid legal formalities have decreased, and surrogates consider not just past directives but also current situations and evolving values. Discussions involving patients, surrogates, and physicians are crucial. Different approaches like past commitments, current well-being, and "life story continuation" may be appropriate depending on the context.

The Cruzan framework is no longer the basis for medical ethics and law. Family decisions, flexible standards, and evolving values now guide care. This shift showcases how medical ethics can adapt through discussions, research, and legal changes. Finding common ground on critical issues in today's divided society remains a challenge, but it's more important than ever.

Thursday, January 18, 2024

Biden administration rescinds much of Trump ‘conscience’ rule for health workers

Nathan Weixel
The Hill
Originally published 9 Jan 24

The Biden administration will largely undo a Trump-era rule that boosted the rights of medical workers to refuse to perform abortions or other services that conflicted with their religious or moral beliefs.

The final rule released Tuesday partially rescinds the Trump administration’s 2019 policy that would have stripped federal funding from health facilities that required workers to provide any service they objected to, such as abortions, contraception, gender-affirming care and sterilization.

The health care conscience protection statutes represent Congress’s attempt to strike a balance between maintaining access to health care and honoring religious beliefs and moral convictions, the Department of Health and Human Services said in the rule.

“Some doctors, nurses, and hospitals, for example, object for religious or moral reasons to providing or referring for abortions or assisted suicide, among other procedures. Respecting such objections honors liberty and human dignity,” the department said.

But at the same time, Health and Human Services said “patients also have rights and health needs, sometimes urgent ones. The Department will continue to respect the balance Congress struck, work to ensure individuals understand their conscience rights, and enforce the law.”

Summary from Healthcare Dive

The HHS Office of Civil Rights has again updated guidance on providers’ conscience rights. The latest iteration, announced on Tuesday, aims to strike a balance between honoring providers’ religious and moral beliefs and ensuring access to healthcare, according to the agency.

President George W. Bush created conscience rules in 2008, which codify the rights of healthcare workers to refuse to perform medical services that conflict with their religious or moral beliefs. Since then, subsequent administrations have rewritten the rules, with Democrats limiting the scope and Republicans expanding conscience protections. 

The most recent revision largely undoes a 2019 Trump-era policy — which never took effect — that sought to expand the rights of healthcare workers broadly to refuse to perform medical services, such as abortions, on religious or moral grounds.

Monday, January 15, 2024

The man helping prevent suicide with Google adverts

Looi, M.-K. (2023).

Here are two excerpts:

Always online

A big challenge in suicide prevention is that people often experience suicidal crises at times when they’re away from clinical facilities, says Nick Allen, professor of psychology at the University of Oregon.

“It’s often in the middle of the night, so one of the great challenges is how can we be there for someone when they really need us, which is not necessarily when they’re engaged with clinical services.”

Telemedicine and other digital interventions came to prominence at the height of the pandemic, but “there’s an app for that” does not always match the patient in need at the right time. Says Onie, “The missing link is using existing infrastructure and habits to meet them where they are.”

Where they are is the internet. “When people are going through suicidal crises they often turn to the internet for information. And Google has the lion’s share of the search business at the moment,” says Allen, who studies digital mental health interventions (and has had grants from Google for his research).

Google’s core business stores information from searches, using it to fuel a highly effective advertising network in which companies pay to have links to their websites and products appear prominently in the “sponsored” sections at the top of all relevant search results.

The company holds 27.5% of the digital advertising market and earned around $224bn from search advertising alone in 2022.

If it knows enough about us to serve up relevant adverts, then it knows when a user is displaying red flag behaviour for suicide. Onie set out to harness this.

“It’s about the ‘attention economy,’” he says, “There’s so much information, there’s so much noise. How do we break through and make sure that the first thing that people see when they’re contemplating suicide is something that could be helpful?”


At its peak the campaign was responding to over 6000 searches a day for each country. And the researchers saw a high level of response.

Typically, most advertising campaigns see low engagement in terms of clickthrough rates (the number of people that actually click on an advert when they see it). Industry benchmarks consider 3.17% a success. The Black Dog campaign saw 5.15% in Australia and 4.02% in the US. Preliminary data show Indonesia to be even higher—as much as 12%.

Because this is an advertising campaign, another measure is cost effectiveness. Google charges the advertiser per click on its advert, so the more engaged an audience is (and thus what Google considers to be a relevant advert to a relative user) the higher the charge. Black Dog’s campaign saw such a high number of users seeing the ads, and such high numbers of users clicking through, that the cost was below that of the industry average of $2.69 a click—specifically, $2.06 for the US campaign. Australia was higher than the industry average, but early data indicate Indonesia was delivering $0.86 a click.
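As a rough sketch, the engagement and cost figures reported above can be compared directly against the industry benchmarks the article cites (3.17% clickthrough, $2.69 a click). The numbers below come straight from the excerpt; Australia's exact cost per click isn't reported, so it is left out of the cost comparison:

```python
# Compare the Black Dog campaign's reported clickthrough rates (CTR)
# and cost-per-click (CPC) against the industry benchmarks cited above.
BENCHMARK_CTR = 3.17  # percent
AVERAGE_CPC = 2.69    # dollars

campaigns = {
    "Australia": {"ctr": 5.15, "cpc": None},  # CPC above average; exact figure not reported
    "US": {"ctr": 4.02, "cpc": 2.06},
    "Indonesia": {"ctr": 12.0, "cpc": 0.86},  # preliminary data
}

for country, stats in campaigns.items():
    ctr_lift = stats["ctr"] / BENCHMARK_CTR
    line = f"{country}: CTR {stats['ctr']:.2f}% ({ctr_lift:.1f}x the benchmark)"
    if stats["cpc"] is not None:
        saving = (1 - stats["cpc"] / AVERAGE_CPC) * 100
        line += f", CPC ${stats['cpc']:.2f} ({saving:.0f}% below average)"
    print(line)
```

Even at the reported figures, the US campaign's clicks cost roughly a quarter less than the industry average, and Indonesia's cost less than a third of it.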

I could not find a free pdf.  The link above works, but is paywalled. Sorry. :(

Saturday, January 6, 2024

Worth the Risk? Greater Acceptance of Instrumental Harm Befalling Men than Women

Graso, M., Reynolds, T. & Aquino, K.
Arch Sex Behav 52, 2433–2445 (2023).


Scientific and organizational interventions often involve trade-offs whereby they benefit some but entail costs to others (i.e., instrumental harm; IH). We hypothesized that the gender of the persons incurring those costs would influence intervention endorsement, such that people would more readily support interventions inflicting IH onto men than onto women. We also hypothesized that women would exhibit greater asymmetries in their acceptance of IH to men versus women. Three experimental studies (two pre-registered) tested these hypotheses. Studies 1 and 2 granted support for these predictions using a variety of interventions and contexts. Study 3 tested a possible boundary condition of these asymmetries using contexts in which women have traditionally been expected to sacrifice more than men: caring for infants, children, the elderly, and the ill. Even in these traditionally female contexts, participants still more readily accepted IH to men than women. Findings indicate people (especially women) are less willing to accept instrumental harm befalling women (vs. men). We discuss the theoretical and practical implications and limitations of our findings.

Here is my summary:

This research investigated the societal acceptance of "instrumental harm" (IH) based on the gender of the person experiencing it. Three studies found that people are more likely to tolerate IH when it happens to men than when it happens to women. This bias is especially pronounced among women and those holding egalitarian or feminist beliefs. Even in contexts traditionally associated with women's vulnerability, IH inflicted on men is seen as more acceptable.

These findings highlight a potential blind spot in our perception of harm and raise concerns about how policies might be influenced by this bias. Further research is needed to understand the underlying reasons for this bias and develop strategies to address it.