Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Technology.

Wednesday, February 28, 2024

Scientists are on the verge of a male birth-control pill. Will men take it?

Jill Filipovic
The Guardian
Originally posted 18 Dec 23

Here is an excerpt:

The overwhelming share of responsibility for preventing pregnancy has always fallen on women. Throughout human history, women have gone to great lengths to prevent pregnancies they didn’t want, and end those they couldn’t prevent. Safe and reliable contraceptive methods are, in the context of how long women have sought to interrupt conception, still incredibly new. Measured by the lifespan of anyone reading this article, though, they are well established, and have for many decades been a normal part of life for millions of women around the world.

To some degree, and if only for obvious biological reasons, it makes sense that pregnancy prevention has historically fallen on women. But it also, as they say, takes two to tango – and only one of the partners has been doing all the work. Luckily, things are changing: thanks to generations of women who have gained unprecedented freedoms and planned their families using highly effective contraception methods, and thanks to men who have shifted their own gender expectations and become more involved partners and fathers, women and men have moved closer to equality than ever.

Among politically progressive couples especially, it’s now standard to expect that a male partner will do his fair share of the household management and childrearing (whether he actually does is a separate question, but the expectation is there). What men generally cannot do, though, is carry pregnancies and birth babies.


Here are some themes worthy of discussion:

Shifting responsibility: The potential availability of a reliable male contraceptive marks a significant departure from the historical norm, in which the burden of pregnancy prevention has fallen primarily on women. This shift raises questions across several dimensions of social life, outlined below.

Gender equality: A crucial consideration is whether men will willingly share responsibility for contraception on an equal footing, or whether societal norms will continue to exert pressure on women to take the lead in this regard.

Reproductive autonomy: The advent of accessible male contraception prompts contemplation on whether it will empower women to exert greater control over their reproductive choices, shaping the landscape of family planning.

Informed consent: An important facet of this shift involves how men will be informed about potential side effects and risks associated with the male contraceptive, particularly in comparison to existing female contraceptives.

Accessibility and equity: Concerns emerge regarding equitable access to the male contraceptive, particularly for marginalized communities. Questions arise about whether affordable and culturally appropriate access will be universally available, regardless of socioeconomic status or geographic location.

Coercion: There is a potential concern that the availability of a male contraceptive might be exploited to coerce women into sexual activity without their full and informed consent.

Psychological and social impact: The introduction of a male contraceptive brings with it potential psychological and social consequences that may not be immediately apparent.

Changes in sexual behavior: The availability of a male contraceptive may influence sexual practices and attitudes towards sex, prompting a reevaluation of societal norms.

Impact on relationships: The shift in responsibility for contraception could potentially cause tension or conflict in existing relationships as couples navigate the evolving dynamics.

Masculinity and stigma: The use of a male contraceptive may challenge traditional notions of masculinity, possibly exposing men who use it to social stigma.

Tuesday, February 27, 2024

Robot, let us pray! Can and should robots have religious functions? An ethical exploration of religious robots

Puzio, A.
AI & Soc (2023).
https://doi.org/10.1007/s00146-023-01812-z

Abstract

Considerable progress is being made in robotics, with robots being developed for many different areas of life: there are service robots, industrial robots, transport robots, medical robots, household robots, sex robots, exploration robots, military robots, and many more. As robot development advances, an intriguing question arises: should robots also encompass religious functions? Religious robots could be used in religious practices, education, discussions, and ceremonies within religious buildings. This article delves into two pivotal questions, combining perspectives from philosophy and religious studies: can and should robots have religious functions? Section 2 initiates the discourse by introducing and discussing the relationship between robots and religion. The core of the article (developed in Sects. 3 and 4) scrutinizes the fundamental questions: can robots possess religious functions, and should they? After an exhaustive discussion of the arguments, benefits, and potential objections regarding religious robots, Sect. 5 addresses the lingering ethical challenges that demand attention. Section 6 presents a discussion of the findings, outlines the limitations of this study, and ultimately responds to the dual research question. Based on the study’s results, brief criteria for the development and deployment of religious robots are proposed, serving as guidelines for future research. Section 7 concludes by offering insights into the future development of religious robots and potential avenues for further research.


Summary

Can robots fulfill religious functions? The article explores the technical feasibility of designing robots that could engage in religious practices, education, and ceremonies. It acknowledges the current limitations of robots, particularly their lack of sentience and spiritual experience. However, it also suggests potential avenues for development, such as robots equipped with advanced emotional intelligence and the ability to learn and interpret religious texts.

Should robots fulfill religious functions? This is where the ethical debate unfolds. The article presents arguments both for and against. On the one hand, robots could potentially offer various benefits, such as increasing accessibility to religious practices, providing companionship and spiritual guidance, and even facilitating interfaith dialogue. On the other hand, concerns include the potential for robotization of faith, the blurring of lines between human and machine in the context of religious experience, and the risk of reinforcing existing biases or creating new ones.

Ultimately, the article concludes that there is no easy answer to the question of whether robots should have religious functions. It emphasizes the need for careful consideration of the ethical implications and ongoing dialogue between religious communities, technologists, and ethicists. This ethical exploration paves the way for further research and discussion as robots continue to evolve and their potential roles in society expand.

Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al. (2024).
The Lancet Digital Health, 6(1), e12–e22.

Summary

Background

Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.

Methods

Using the Azure OpenAI application interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.

Findings

We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.

Interpretation

Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.
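The Methods above describe comparing GPT-4's estimated demographic distributions against true US prevalence and applying standard significance tests across groups. As a rough illustration only (not the authors' code, and with invented group names and counts), a minimal sketch of that style of comparison might look like this:

```python
# Minimal sketch, assuming hypothetical counts and prevalence figures:
# compare demographic labels in model-generated vignettes against a
# reference distribution with a chi-square goodness-of-fit test.
from scipy.stats import chisquare

# Hypothetical counts of a demographic attribute across 100 generated vignettes.
observed = {"group_a": 78, "group_b": 14, "group_c": 8}

# Hypothetical reference prevalence proportions for the same condition.
reference = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

total = sum(observed.values())
f_obs = [observed[g] for g in reference]
f_exp = [reference[g] * total for g in reference]  # scale to the same total

stat, p = chisquare(f_obs=f_obs, f_exp=f_exp)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value suggests the generated vignettes deviate from the reference
# distribution, i.e., possible demographic stereotyping for that condition.
```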

Sunday, January 28, 2024

Americans are lonely and it’s killing them. How the US can combat this new epidemic.

Adrianna Rodriguez
USA Today
Originally posted 24 Dec 23

America has a new epidemic. It can’t be treated using traditional therapies even though it has debilitating and even deadly consequences.

The problem seeping in at the corners of our communities is loneliness and U.S. Surgeon General Dr. Vivek Murthy is hoping to generate awareness and offer remedies before it claims more lives.

“Most of us probably think of loneliness as just a bad feeling,” he told USA TODAY. “It turns out that loneliness has far greater implications for our health when we struggle with a sense of social disconnection, being lonely or isolated.”

Loneliness is detrimental to mental and physical health, experts say, leading to an increased risk of heart disease, dementia, stroke and premature death. As researchers track record levels of self-reported loneliness, public health leaders are banding together to develop a public health framework to address the epidemic.

“The world is becoming lonelier and there’s some very, very worrisome consequences,” said Dr. Jeremy Nobel, founder of The Foundation for Art and Healing, a nonprofit that addresses public health concerns through creative expression, which launched an initiative called Project Unlonely.

“It won’t just make you miserable, but loneliness will kill you,” he said. “And that’s why it’s a crisis.”


Key points:
  • Loneliness Crisis: America faces a growing epidemic of loneliness impacting mental and physical health, leading to increased risks of heart disease, dementia, stroke, and premature death.
  • Diverse and Widespread: Loneliness affects various demographics, from young adults to older populations, and is not offset by social media interaction.
  • Health Risks: The Surgeon General reports loneliness raises risk of premature death by 26%, equivalent to smoking 15 cigarettes daily. Heart disease and stroke risks also increase significantly.
  • Causes: Numerous factors contribute, including societal changes, technology overuse, remote work, and lack of genuine social connection.
  • Solutions: Individual actions like reaching out and mindful interactions help. Additionally, public health strategies like "social prescribing" and community initiatives are crucial.
  • Collective Effort Needed: Overcoming the epidemic requires collaboration across sectors, fostering stronger social connections within communities and digital spaces.

Monday, January 15, 2024

The man helping prevent suicide with Google adverts

Looi, M.-K. (2023).
BMJ.

Here are two excerpts:

Always online

A big challenge in suicide prevention is that people often experience suicidal crises at times when they’re away from clinical facilities, says Nick Allen, professor of psychology at the University of Oregon.

“It’s often in the middle of the night, so one of the great challenges is how can we be there for someone when they really need us, which is not necessarily when they’re engaged with clinical services.”

Telemedicine and other digital interventions came to prominence at the height of the pandemic, but “there’s an app for that” does not always match the patient in need at the right time. Says Onie, “The missing link is using existing infrastructure and habits to meet them where they are.”

Where they are is the internet. “When people are going through suicidal crises they often turn to the internet for information. And Google has the lion’s share of the search business at the moment,” says Allen, who studies digital mental health interventions (and has had grants from Google for his research).

Google’s core business stores information from searches, using it to fuel a highly effective advertising network in which companies pay to have links to their websites and products appear prominently in the “sponsored” sections at the top of all relevant search results.

The company holds 27.5% of the digital advertising market, earning around $224bn from search advertising alone in 2022.

If it knows enough about us to serve up relevant adverts, then it knows when a user is displaying red flag behaviour for suicide. Onie set out to harness this.

“It’s about the ‘attention economy,’” he says, “There’s so much information, there’s so much noise. How do we break through and make sure that the first thing that people see when they’re contemplating suicide is something that could be helpful?”

(cut)

At its peak the campaign was responding to over 6000 searches a day for each country. And the researchers saw a high level of response.

Typically, most advertising campaigns see low engagement in terms of clickthrough rates (the number of people that actually click on an advert when they see it). Industry benchmarks consider 3.17% a success. The Black Dog campaign saw 5.15% in Australia and 4.02% in the US. Preliminary data show Indonesia to be even higher—as much as 12%.

Because this is an advertising campaign, another measure is cost effectiveness. Google charges the advertiser per click on its advert, so the more engaged an audience is (and thus what Google considers to be a relevant advert to a relevant user) the higher the charge. Black Dog’s campaign saw such a high number of users seeing the ads, and such high numbers of users clicking through, that the cost was below that of the industry average of $2.69 a click—specifically, $2.06 for the US campaign. Australia was higher than the industry average, but early data indicate Indonesia was delivering $0.86 a click.
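Both metrics quoted here reduce to simple arithmetic: click-through rate is clicks divided by impressions, and cost per click is spend divided by clicks. The toy sketch below illustrates the calculation; the impression and spend figures are invented, and only the benchmark values come from the excerpt.

```python
# Illustrative arithmetic for the campaign metrics discussed above.
# Impressions and spend are hypothetical; the 3.17% CTR benchmark and
# $2.69 average CPC are the figures quoted in the excerpt.
def click_through_rate(clicks: int, impressions: int) -> float:
    return clicks / impressions

def cost_per_click(spend: float, clicks: int) -> float:
    return spend / clicks

impressions = 100_000   # hypothetical number of times the ad was shown
clicks = 5_150          # would correspond to the 5.15% CTR reported for Australia
spend = 12_000.00       # hypothetical total spend in dollars

print(f"CTR: {click_through_rate(clicks, impressions):.2%} (benchmark ~3.17%)")
print(f"CPC: ${cost_per_click(spend, clicks):.2f} (industry average ~$2.69)")
```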

-------
I could not find a free pdf.  The link above works, but is paywalled. Sorry. :(

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 NOV 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive as negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Monday, January 1, 2024

Cyborg computer with living brain organoid aces machine learning tests

Loz Blain
New Atlas
Originally posted 12 DEC 23

Here are two excerpts:

Now, Indiana University researchers have taken a slightly different approach by growing a brain "organoid" and mounting that on a silicon chip. The difference might seem academic, but by allowing the stem cells to self-organize into a three-dimensional structure, the researchers hypothesized that the resulting organoid might be significantly smarter, that the neurons might exhibit more "complexity, connectivity, neuroplasticity and neurogenesis" if they were allowed to arrange themselves more like the way they normally do.

So they grew themselves a little brain ball organoid, less than a nanometer in diameter, and they mounted it on a high-density multi-electrode array – a chip that's able to send electrical signals into the brain organoid, as well as reading electrical signals that come out due to neural activity.

They called it "Brainoware" – which they probably meant as something adjacent to hardware and software, but which sounds far too close to "BrainAware" for my sensitive tastes, and evokes the perpetual nightmare of one of these things becoming fully sentient and understanding its fate.

(cut)

And finally, much like the team at Cortical Labs, this team really has no clear idea what to do about the ethics of creating micro-brains out of human neurons and wiring them into living cyborg computers. “As the sophistication of these organoid systems increases, it is critical for the community to examine the myriad of neuroethical issues that surround biocomputing systems incorporating human neural tissue," wrote the team. "It may be decades before general biocomputing systems can be created, but this research is likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases."


Here is my summary:

There is a new type of computer chip that uses living brain cells. The brain cells are grown from human stem cells and are organized into a ball-like structure called an organoid. The organoid is mounted on a chip that can send electrical signals to the brain cells and read the electrical signals that the brain cells produce. The researchers found that the organoid could learn to perform tasks such as speech recognition and math prediction much faster than traditional computers. They believe that this new type of computer chip could have many applications, such as in artificial intelligence and medical research. However, there are also some ethical concerns about using living brain cells in computers.
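In the published work, the chip-plus-organoid system is described along the lines of reservoir computing: the organoid acts as a fixed nonlinear "reservoir," and only a simple readout trained on its recorded responses does the task-specific learning. The sketch below is a software-only analogy of that readout idea, with a random numeric reservoir standing in for the biological tissue; it is not a model of the actual device.

```python
# Software-only analogy of a reservoir-computing readout: a fixed random
# "reservoir" transforms inputs nonlinearly, and only a linear readout is
# trained. The random matrix loosely stands in for the untrained organoid.
import numpy as np

rng = np.random.default_rng(0)

# Toy task with a nonlinear (XOR-like) decision boundary.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# Fixed random reservoir: random projection with bias and nonlinearity, never trained.
W_res = rng.normal(size=(2, 100))
b_res = rng.normal(size=(1, 100))
states = np.tanh(X @ W_res + b_res)

# Trained part: a ridge-regression readout on the reservoir states.
lam = 1e-2
A = states.T @ states + lam * np.eye(states.shape[1])
w_out = np.linalg.solve(A, states.T @ y)

preds = (states @ w_out > 0.5).astype(float)
print("training accuracy of the readout:", (preds == y).mean())
```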

Thursday, December 21, 2023

Chatbot therapy is risky. It’s also not useless

A.W. Ohlheiser
vox.com
Originally posted 14 Dec 23

Here is an excerpt:

So what are the risks of chatbot therapy?

There are some obvious concerns here: Privacy is a big one. That includes the handling of the training data used to make generative AI tools better at mimicking therapy as well as the privacy of the users who end up disclosing sensitive medical information to a chatbot while seeking help. There are also the biases built into many of these systems as they stand today, which often reflect and reinforce the larger systemic inequalities that already exist in society.

But the biggest risk of chatbot therapy — whether it’s poorly conceived or provided by software that was not designed for mental health — is that it could hurt people by not providing good support and care. Therapy is more than a chat transcript and a set of suggestions. Honos-Webb, who uses generative AI tools like ChatGPT to organize her thoughts while writing articles on ADHD but not for her practice as a therapist, noted that therapists pick up on a lot of cues and nuances that AI is not prepared to catch.

Stade, in her working paper, notes that while large language models have a “promising” capacity to conduct some of the skills needed for psychotherapy, there’s a difference between “simulating therapy skills” and “implementing them effectively.” She noted specific concerns around how these systems might handle complex cases, including those involving suicidal thoughts, substance abuse, or specific life events.

Honos-Webb gave the example of an older woman who recently developed an eating disorder. One level of treatment might focus specifically on that behavior: If someone isn’t eating, what might help them eat? But a good therapist will pick up on more of that. Over time, that therapist and patient might make the connection between recent life events: Maybe the patient’s husband recently retired. She’s angry because suddenly he’s home all the time, taking up her space.

“So much of therapy is being responsive to emerging context, what you’re seeing, what you’re noticing,” Honos-Webb explained. And the effectiveness of that work is directly tied to the developing relationship between therapist and patient.


Here is my take:

The promise of AI in mental health care dances on a delicate knife's edge. Chatbot therapy, with its alluring accessibility and anonymity, tempts us with a quick fix for the ever-growing burden of mental illness. Yet, as with any powerful tool, its potential can be both a balm and a poison, demanding a wise touch for its ethical wielding.

On the one hand, imagine a world where everyone, regardless of location or circumstance, can find a non-judgmental ear, a gentle guide through the labyrinth of their own minds. Chatbots, tireless and endlessly patient, could offer a first step of support, a bridge to human therapy when needed. In the hushed hours of isolation, they could remind us we're not alone, providing solace and fostering resilience.

But let us not be lulled into a false sense of ease. Technology, however sophisticated, lacks the warmth of human connection, the nuanced understanding of a shared gaze, the empathy that breathes life into words. We must remember that a chatbot can never replace the irreplaceable – the human relationship at the heart of genuine healing.

Therefore, our embrace of chatbot therapy must be tempered with prudence. We must ensure adequate safeguards, preventing them from masquerading as a panacea, neglecting the complex needs of human beings. Transparency is key – users must be aware of the limitations, of the algorithms whispering behind the chatbot's words. Above all, let us never sacrifice the sacred space of therapy for the cold efficiency of code.

Chatbot therapy can be a bridge, a stepping stone, but never the destination. Let us use technology with wisdom, acknowledging its potential good while holding fast to the irreplaceable value of human connection in the intricate tapestry of healing. Only then can we mental health professionals navigate the ethical tightrope and make technology safe and effective, when and where possible.

Tuesday, December 19, 2023

Human bias in algorithm design

Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al.
Nat Hum Behav 7, 1822–1824 (2023).

Here is how the article starts:

Algorithms are designed to learn user preferences by observing user behaviour. This causes algorithms to fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed. Many people believe that algorithms are failing to live up to their promise to reflect user preferences and improve social welfare. The problem is not technological. Modern algorithms are sophisticated and accurate. Training algorithms on unrepresentative samples contributes to the problem, but failures happen even when algorithms are trained on the population. Nor is the problem caused only by the profit motive. For-profit firms design algorithms at a cost to users, but even non-profit organizations and governments fall short.

All algorithms are built on a psychological model of what the user is doing. The fundamental constraint on this model is the narrowness of the measurable variables for algorithms to predict. We suggest that algorithms fail to reflect user preferences and enhance their welfare because algorithms rely on revealed preferences to make predictions. Designers build algorithms with the erroneous assumption that user behaviour (revealed preferences) tells us (1) what users rationally prefer (normative preferences) and (2) what will enhance user welfare. Reliance on this 95-year-old economic model, rather than the more realistic assumption that users exhibit bounded rationality, leads designers to train algorithms on user behaviour. Revealed preferences can identify unknown preferences, but revealed preferences are an incomplete — and at times misleading — measure of the normative preferences and values of users. It is ironic that modern algorithms are built on an outmoded and indefensible commitment to revealed preferences.


Here is my summary.

Human biases can be reflected in algorithms, leading to unintended discriminatory outcomes. The authors argue that algorithms are not simply objective tools, but rather embody the values and assumptions of their creators. They highlight the importance of considering psychological factors when designing algorithms, as human behavior is often influenced by biases. To address this issue, the authors propose a framework for developing psychologically informed algorithms that can better capture user preferences and enhance social welfare. They emphasize the need for a more holistic approach to algorithm design that goes beyond technical considerations and takes into account the human element.
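As a concrete toy illustration of that point (invented items and numbers, not from the paper): a ranker driven by clicks (revealed preferences) and a ranker driven by users' reflective ratings (a stand-in for normative preferences) can order the same items very differently.

```python
# Toy illustration of the gap between revealed and normative preferences.
# Each item has a click-through rate (revealed-preference signal) and a
# hypothetical "reflective rating" users give when asked what they value.
items = [
    {"name": "outrage_news",   "ctr": 0.12, "reflective_rating": 2.1},
    {"name": "howto_video",    "ctr": 0.05, "reflective_rating": 4.3},
    {"name": "celebrity_feed", "ctr": 0.09, "reflective_rating": 2.8},
    {"name": "longform_essay", "ctr": 0.03, "reflective_rating": 4.6},
]

by_clicks  = sorted(items, key=lambda i: i["ctr"], reverse=True)
by_welfare = sorted(items, key=lambda i: i["reflective_rating"], reverse=True)

print("ranked by revealed preferences (clicks):",
      [i["name"] for i in by_clicks])
print("ranked by stated/normative preferences: ",
      [i["name"] for i in by_welfare])
# The two orderings disagree: optimizing engagement alone promotes items
# users click on impulsively but do not endorse on reflection.
```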

Friday, November 3, 2023

Posthumanism’s Revolt Against Responsibility

Nolen Gertz
Commonweal Magazine
Originally published 31 Oct 23

Here is an excerpt:

A major problem with this view—one Kirsch neglects—is that it conflates the destructiveness of particular humans with the destructiveness of humanity in general. Acknowledging that climate change is driven by human activity should not prevent us from identifying precisely which humans and activities are to blame. Plenty of people are concerned about climate change and have altered their behavior by, for example, using public transportation, recycling, or being more conscious about what they buy. Yet this individual behavior change is not sufficient because climate change is driven by the large-scale behavior of corporations and governments.

In other words, it is somewhat misleading to say we have entered the “Anthropocene” because anthropos is not as a whole to blame for climate change. Rather, in order to place the blame where it truly belongs, it would be more appropriate—as Jason W. Moore, Donna J. Haraway, and others have argued—to say we have entered the “Capitalocene.” Blaming humanity in general for climate change excuses those particular individuals and groups actually responsible. To put it another way, to see everyone as responsible is to see no one as responsible. Anthropocene antihumanism is thus a public-relations victory for the corporations and governments destroying the planet. They can maintain business as usual on the pretense that human nature itself is to blame for climate change and that there is little or nothing corporations or governments can or should do to stop it, since, after all, they’re only human.

Kirsch does not address these straightforward criticisms of Anthropocene antihumanism. This throws into doubt his claim that he is cataloguing their views to judge whether they are convincing and to explore their likely impact. Kirsch does briefly bring up the activist Greta Thunberg as a potential opponent of the nihilistic antihumanists, but he doesn’t consider her challenge in depth. 


Here is my summary:

Anthropocene antihumanism is a pessimistic view that sees humanity as a destructive force on the planet. It argues that humans have caused climate change, mass extinctions, and other environmental problems, and that we are ultimately incapable of living in harmony with nature. Some Anthropocene antihumanists believe that humanity should go extinct, while others believe that we should radically change our way of life in order to avoid destroying ourselves and the planet.

Some bullets
  • Posthumanism is a broad philosophical movement that challenges the traditional view of what it means to be human.
  • Anthropocene antihumanism and transhumanism are two strands of posthumanism that share a common theme of revolt against responsibility.
  • Anthropocene antihumanists believe that humanity is so destructive that it is beyond redemption, and that we should therefore either go extinct or give up our responsibility to manage the planet.
  • Transhumanists believe that we can transcend our human limitations and create a new, posthuman species that is not bound by the same moral and ethical constraints as humans.
  • Gertz argues that this revolt against responsibility is a dangerous trend, and that we should instead work to create a more sustainable and just future for all.

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly more sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are increasingly growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, being targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts. This means that the systems should be designed in a way that makes it clear to users that they are not sentient beings. The second policy is to create systems that are clearly deserving of moral consideration as sentient beings. This means that the systems should be designed to have the same moral status as humans or other animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems that are plausibly debatable as sentient.
  • Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Saturday, June 17, 2023

Debt Collectors Want To Use AI Chatbots To Hustle People For Money

Corin Faife
vice.com
Originally posted 18 MAY 23

Here are two excerpts:

The prospect of automated AI systems making phone calls to distressed people adds another dystopian element to an industry that has long targeted poor and marginalized people. Debt collection and enforcement is far more likely to occur in Black communities than white ones, and research has shown that predatory debt and interest rates exacerbate poverty by keeping people trapped in a never-ending cycle. 

In recent years, borrowers in the US have been piling on debt. In the fourth quarter of 2022, household debt rose to a record $16.9 trillion according to the New York Federal Reserve, accompanied by an increase in delinquency rates on larger debt obligations like mortgages and auto loans. Outstanding credit card balances are at record levels, too. The pandemic generated a huge boom in online spending, and besides traditional credit cards, younger spenders were also hooked by fintech startups pushing new finance products, like the extremely popular “buy now, pay later” model of Klarna, Sezzle, Quadpay and the like.

So debt is mounting, and with interest rates up, more and more people are missing payments. That means more outstanding debts being passed on to collection, giving the industry a chance to sprinkle some AI onto the age-old process of prodding, coaxing, and pressuring people to pay up.

For an insight into how this works, we need look no further than the sales copy of companies that make debt collection software. Here, products are described in a mix of generic corp-speak and dystopian portent: SmartAction, another conversational AI product like Skit, has a debt collection offering that claims to help with “alleviating the negative feelings customers might experience with a human during an uncomfortable process”—because they’ll surely be more comfortable trying to negotiate payments with a robot instead. 

(cut)

“Striking the right balance between assertiveness and empathy is a significant challenge in debt collection,” the company writes in the blog post, which claims GPT-4 has the ability to be “firm and compassionate” with customers.

When algorithmic, dynamically optimized systems are applied to sensitive areas like credit and finance, there’s a real possibility that bias is being unknowingly introduced. A McKinsey report into digital collections strategies plainly suggests that AI can be used to identify and segment customers by risk profile—i.e. credit score plus whatever other data points the lender can factor in—and fine-tune contact techniques accordingly. 

Friday, June 16, 2023

ChatGPT Is a Plagiarism Machine

Joseph Keegin
The Chronicle
Originally posted 23 MAY 23

Here is an excerpt:

A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

Tuesday, June 13, 2023

Using the Veil of Ignorance to align AI systems with principles of justice

Weidinger, L., McKee, K. R., et al. (2023).
PNAS, 120(18), e2213709120.

Abstract

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an Artificial Intelligence (AI) assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.

Significance

The growing integration of Artificial Intelligence (AI) into society raises a critical question: How can principles be fairly selected to govern these systems? Across five studies, with a total of 2,508 participants, we use the Veil of Ignorance to select principles to align AI systems. Compared to participants who know their position, participants behind the veil more frequently choose, and endorse upon reflection, principles for AI that prioritize the worst-off. This pattern is driven by increased consideration of fairness, rather than by political orientation or attitudes to risk. Our findings suggest that the Veil of Ignorance may be a suitable process for selecting principles to govern real-world applications of AI.

From the Discussion section

What do these findings tell us about the selection of principles for AI in the real world? First, the effects we observe suggest that—even though the VoI was initially proposed as a mechanism to identify principles of justice to govern society—it can be meaningfully applied to the selection of governance principles for AI. Previous studies applied the VoI to the state, such that our results provide an extension of prior findings to the domain of AI. Second, the VoI mechanism demonstrates many of the qualities that we want from a real-world alignment procedure: It is an impartial process that recruits fairness-based reasoning rather than self-serving preferences. It also leads to choices that people continue to endorse across different contexts even where they face a self-interested motivation to change their mind. This is both functionally valuable in that aligning AI to stable preferences requires less frequent updating as preferences change, and morally significant, insofar as we judge stable reflectively endorsed preferences to be more authoritative than their nonreflectively endorsed counterparts. Third, neither principle choice nor subsequent endorsement appear to be particularly affected by political affiliation—indicating that the VoI may be a mechanism to reach agreement even between people with different political beliefs. Lastly, these findings provide some guidance about what the content of principles for AI, selected from behind a VoI, may look like: When situated behind the VoI, the majority of participants instructed the AI assistant to help those who were least advantaged.
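As a purely hypothetical illustration of the two families of principle being compared (the payoffs below are invented and are not the study's task parameters), a "maximise total benefit" rule and a "prioritise the worst-off" rule can point an AI assistant toward different people:

```python
# Toy contrast between two distributive principles an AI assistant might
# follow. All payoff numbers are hypothetical.
players = {
    "p1": {"baseline": 10, "productivity": 5},
    "p2": {"baseline": 4,  "productivity": 2},
    "p3": {"baseline": 1,  "productivity": 1},
}

def maximise_total(players):
    # Help the player whose productivity adds most to the group total.
    return max(players, key=lambda p: players[p]["productivity"])

def prioritise_worst_off(players):
    # Help the player with the lowest baseline payoff.
    return min(players, key=lambda p: players[p]["baseline"])

print("maximise-total principle helps:      ", maximise_total(players))       # p1
print("prioritise-worst-off principle helps:", prioritise_worst_off(players)) # p3
```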

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 MAY 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.


Wednesday, May 24, 2023

Fighting for our cognitive liberty

Liz Mineo
The Harvard Gazette
Originally published 26 April 23

Imagine going to work and having your employer monitor your brainwaves to see whether you’re mentally tired or fully engaged in filling out that spreadsheet on April sales.

Nita Farahany, professor of law and philosophy at Duke Law School and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” says it’s already happening, and we all should be worried about it.

Farahany highlighted the promise and risks of neurotechnology in a conversation with Francis X. Shen, an associate professor in the Harvard Medical School Center for Bioethics and the MGH Department of Psychiatry, and an affiliated professor at Harvard Law School. The Monday webinar was co-sponsored by the Harvard Medical School Center for Bioethics, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the Dana Foundation.

Farahany said the practice of tracking workers’ brains, once exclusively the stuff of science fiction, follows the natural evolution of personal technology, which has normalized the use of wearable devices that chronicle heartbeats, footsteps, and body temperatures. Sensors capable of detecting and decoding brain activity already have been embedded into everyday devices such as earbuds, headphones, watches, and wearable tattoos.

“Commodification of brain data has already begun,” she said. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet. When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place. That’s why I think now is the right moment to do so.”

Safeguards to protect people’s freedom of thought, privacy, and self-determination should be implemented now, said Farahany. Five thousand companies around the world are using SmartCap technologies to track workers’ fatigue levels, and many other companies are using other technologies to track focus, engagement and boredom in the workplace.

If protections are put in place, said Farahany, the story with neurotechnology could be different than the one Shoshana Zuboff warns of in her 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” In it Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School, examines the threat of the widescale corporate commodification of personal data in which predictions of our consumer activities are bought, sold, and used to modify behavior.

Sunday, May 21, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., Rüther, M.
AI Ethics (2023).
https://doi.org/10.1007/s43681-023-00273-w

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

From Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea, namely to the idea that we do not have to alter our human lifeform in an extensive way and also can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, which include the following two that we cannot deal with extensively but at least want to briefly comment on. First, we assumed that certain professional fields, especially in the meaning conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently also no genuine empathy. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to the automation of superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life according to which meaning conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argumentation would have to try to figure out, whether this trias is really exhaustive, and if so, due to which underlying more general principle. Third, the receptive side of finding meaning in the realm of the true and beautiful was emphasized and opposed to the active striving towards meaningful aims. Here, we have to more precisely clarify what axiological status reception has in contrast to active production—whether it is possibly meaning conferring to a comparable extent or whether it is actually just a less meaningful form. This is particularly important to be able to better assess the appeal of our proposal, which depends heavily on the attractiveness of the vita contemplativa.

Saturday, May 13, 2023

Doctors are drowning in paperwork. Some companies claim AI can help

Geoff Brumfiel
NPR.org - Health Shots
Originally posted 5 APR 23

Here are two excerpts:

But Paul kept getting pinged from younger doctors and medical students. They were using ChatGPT, and saying it was pretty good at answering clinical questions. Then the users of his software started asking about it.

In general, doctors should not be using ChatGPT by itself to practice medicine, warns Marc Succi, a doctor at Massachusetts General Hospital who has conducted evaluations of how the chatbot performs at diagnosing patients. When presented with hypothetical cases, he says, ChatGPT could produce a correct diagnosis accurately at close to the level of a third- or fourth-year medical student. Still, he adds, the program can also hallucinate findings and fabricate sources.

"I would express considerable caution using this in a clinical scenario for any reason, at the current stage," he says.

But Paul believed the underlying technology can be turned into a powerful engine for medicine. Paul and his colleagues have created a program called "Glass AI" based off of ChatGPT. A doctor tells the Glass AI chatbot about a patient, and it can suggest a list of possible diagnoses and a treatment plan. Rather than working from the raw ChatGPT information base, the Glass AI system uses a virtual medical textbook written by humans as its main source of facts – something Paul says makes the system safer and more reliable.

(cut)

Nabla, which he co-founded, is now testing a system that can, in real time, listen to a conversation between a doctor and a patient and provide a summary of what the two said to one another. Doctors inform their patients that the system is being used in advance, and as a privacy measure, it doesn't actually record the conversation.

"It shows a report, and then the doctor will validate with one click, and 99% of the time it's right and it works," he says.

The summary can be uploaded to a hospital records system, saving the doctor valuable time.

Other companies are pursuing a similar approach. In late March, Nuance Communications, a subsidiary of Microsoft, announced that it would be rolling out its own AI service designed to streamline note-taking using the latest version of ChatGPT, GPT-4. The company says it will showcase its software later this month.
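The Glass AI description above amounts to grounding a model's suggestions in a curated, human-written reference text rather than in the chatbot's open-ended training data. One common pattern for that is retrieval followed by prompting; the sketch below is a generic, hypothetical illustration of the pattern (toy corpus, toy keyword retriever), not a description of how Glass AI, Nabla, or Nuance actually implement their products.

```python
# Generic retrieve-then-prompt sketch: ground an LLM prompt in passages
# pulled from a curated reference text. The corpus and scoring function are
# toy stand-ins; no claim is made about any vendor's implementation.
CORPUS = [
    "Community-acquired pneumonia: fever, productive cough, focal crackles.",
    "Pulmonary embolism: pleuritic chest pain, tachycardia, hypoxia.",
    "Heart failure exacerbation: orthopnea, bilateral edema, elevated BNP.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by simple word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda p: len(q_words & set(p.lower().split())),
                  reverse=True)[:k]

def build_prompt(case_note: str) -> str:
    context = "\n".join(f"- {p}" for p in retrieve(case_note, CORPUS))
    return ("Using ONLY the reference passages below, suggest a differential "
            "diagnosis and note any missing information.\n"
            f"Reference passages:\n{context}\n"
            f"Case: {case_note}\n")

print(build_prompt("62-year-old with fever, productive cough and crackles"))
```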

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 
https://doi.org/10.1007/s13347-023-00620-z

Abstract

In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.

(cut)

Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is led by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming on its own, but has some effect on how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions which can shed light on the presupposed underlying normative claims that are involved in the field. Here, further exploration at deeper levels could help us to see which things are important and which are not more clear, and finally in which directions the field should develop.